Go: cmd/go: add package version support to Go toolchain

Created on 7 Mar 2018  ·  242 Comments  ·  Source: golang/go

proposal: add package version support to Go toolchain

It is long past time to add versions to the working vocabulary of both Go developers and our tools.
The linked proposal describes a way to do that. See especially the Rationale section for a discussion of alternatives.

This GitHub issue is for discussion about the substance of the proposal.

Other references:

Labels: Proposal, Proposal-Accepted, modules

Most helpful comment

This proposal has been open with active discussions for over two months: @rsc & @spf13 have conducted feedback sessions and gathered valuable input from the community that has resulted in revisions to the proposal. @rsc has also held weekly meetings with @sdboyer in order to gain further feedback. There has been valuable feedback provided on the proposal that has resulted in additional revisions. Increasingly this feedback is on the accompanying implementation rather than the proposal. After considerable review we feel that it is time to accept this proposal and let Go’s broad ecosystem of tool implementers begin making critical adjustments so our user base can have the best possible experience.

There have been two objections to this proposal which we feel we should speak to:

  1. The proposal will require people to change some of their practices around using and releasing libraries.
  2. The proposal fails to provide a technical solution to all possible scenarios that might arise involving incompatibilities.

These observations are accurate, but the behavior they describe is working as intended. Authors and users of code _will_ have to change some of their practices around using and releasing libraries, just as developers have adapted to other details of Go, such as running gofmt. Shifting best practices is sometimes the right solution. Similarly, vgo need not handle all possible situations involving incompatibilities. As Russ pointed out in his recent talk at Gophercon Singapore, the only permanent solution to incompatibility is to work together to correct the incompatibility and maintain the Go package ecosystem. Temporary workarounds in a tool like vgo or dep need only work long enough to give developers time to solve the real problem, and vgo does this job well enough.

We appreciate all of the feedback and passion you have brought to this critical issue. The proposal has been accepted.

— The Go Proposal Review Committee

All 242 comments

Frequently Asked Questions

This issue comment answers the most frequently asked questions, whether from the discussion below or from other discussions. Other questions from the discussion are in the next issue comment.

Why is the proposal not “use Dep”?

At the start of the journey that led to this proposal, almost two years ago, we all believed the answer would be to follow the package versioning approach exemplified by Ruby's Bundler and then Rust's Cargo: tagged semantic versions, a hand-edited dependency constraint file known as a manifest, a separate machine-generated transitive dependency description known as a lock file, a version solver to compute a lock file satisfying the manifest, and repositories as the unit of versioning. Dep follows this rough plan almost exactly and was originally intended to serve as the model for go command integration. However, the more I understood the details of the Bundler/Cargo/Dep approach and what they would mean for Go, especially built into the go command, and the more I discussed those details with others on the Go team, the less a few of those details seemed like a good fit for Go. The proposal adjusts those details in the hope of shipping a system that is easier for developers to understand and to use. See the proposal's rationale section for more about the specific details we wanted to change, and also the blog post announcing the proposal.

Why must major version numbers appear in import paths?

To follow the import compatibility rule, which dramatically simplifies the rest of the system. See also the blog post announcing the proposal, which talks more about the motivation and justification for the import compatibility rule.
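As a minimal sketch of what the rule looks like in source, assuming a hypothetical module example.com/lib whose v3 is a separate major version: each major version from v2 onward gets its own import path, so two major versions can coexist in a single build.

```go
package main

import (
	"fmt"

	lib "example.com/lib"      // major version v0 or v1: no version suffix
	libv3 "example.com/lib/v3" // major version v3: explicit /v3 path element
)

func main() {
	// The two major versions are distinct packages and can be used side by side.
	fmt.Println(lib.Greet(), libv3.Greet()) // Greet is a hypothetical API
}
```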

Why are major versions v0, v1 omitted from import paths?

v1 is omitted from import paths for two reasons. First, many developers will create packages that never make a breaking change once they reach v1, which is something we've encouraged from the start. We don't believe all those developers should be forced to have an explicit v1 when they may have no intention of ever releasing v2. The v1 becomes just noise. If those developers do eventually create a v2, the extra precision kicks in then, to distinguish from the default, v1. There are good arguments about visible stability for putting the v1 everywhere, and if we were designing a system from scratch, maybe that would make it a close call. But the weight of existing code tips the balance strongly in favor of omitting v1.

v0 is omitted from import paths because - according to semver - there are no compatibility guarantees at all for those versions. Requiring an explicit v0 element would do little to ensure compatibility; you'd have to say v0.1.2 to be completely precise, updating all import paths on every update of the library. That seems like overkill. Instead we hope that developers will simply look at the list of modules they depend on and be appropriately wary of any v0.x.y versions they find.

This has the effect of not distinguishing v0 from v1 in import paths, but usually v0 is a sequence of breaking changes leading to v1, so it makes sense to treat v1 as the final step in that breaking sequence, not something that needs distinguishing from v0. As @Merovius put it (https://github.com/golang/go/issues/24301#issuecomment-376213693):

By using v0.x, you are accepting that v0.(x+1) might force you to fix your code. Why is it a problem if v0.(x+1) is called v1.0 instead?

Finally, omitting the major versions v0 and v1 is mandatory - not optional - so that there is a single canonical import path for each package.

Why must I create a new branch for v2 instead of continuing to work on master?

You don't have to create a new branch. The vgo modules post unfortunately gives that impression in its discussion of the "major branch" repository layout. But vgo doesn't care about branches. It only looks up tags and resolves which specific commits they point at. If you develop v1 on master, you decide you are completely done with v1, and you want to start making v2 commits on master, that's fine: start tagging master with v2.x.y tags. But note that some of your users will keep using v1, and you may occasionally want to issue a minor v1 bug fix. You might at least want to fork a new v1 branch for that work at the point where you start using master for v2.
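A sketch of that tag-only workflow in git (version numbers are illustrative):

```
# v2 development simply continues on master once v1 is done.
git tag v2.0.0
git push origin v2.0.0

# Later, to ship a v1 bug fix, fork a branch from the last v1 tag.
git checkout -b v1 v1.5.0
# ...commit the fix...
git tag v1.5.1
git push origin v1 v1.5.1
```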

Won't minimal version selection keep developers from getting important updates?

This is a common fear, but I really think if anything the opposite will happen. Quoting the "Upgrade Speed" section of https://research.swtch.com/vgo-mvs:

Given that minimal version selection takes the minimum allowed version of each dependency, it's easy to think that this would lead to use of very old copies of packages, which in turn might lead to unnecessary bugs or security problems. In practice, however, I think the opposite will happen, because the minimum allowed version is the maximum of all the constraints, so the one lever of control made available to all modules in a build is the ability to force the use of a newer version of a dependency than would otherwise be used. I expect that users of minimal version selection will end up with programs that are almost as up-to-date as their friends using more aggressive systems like Cargo.

For example, suppose you are writing a program that depends on a handful of other modules, all of which depend on some very common module, like gopkg.in/yaml.v2. Your program's build will use the newest YAML version among the ones requested by your module and that handful of dependencies. Even just one conscientious dependency can force your build to update many other dependencies. This is the opposite of the Kubernetes Go client problem I mentioned earlier.

If anything, minimal version selection would instead suffer the opposite problem, that this “max of the minimums” answer serves as a ratchet that forces dependencies forward too quickly. But I think in practice dependencies will move forward at just the right speed, which ends up being just the right amount slower than Cargo and friends.

By "right amount slower" I was referring to the key property that upgrades happen only when you ask for them, not when you haven't. That means that code only changes (in potentially unexpected and breaking ways) when you are expecting that to happen and ready to test it, debug it, and so on.

See also the response https://github.com/golang/go/issues/24301#issuecomment-375992900 by @Merovius.

If $GOPATH is deprecated, where does downloaded code live?

Code you check out and work on and modify can be stored anywhere in your file system, just like with essentially every other developer tool.

Vgo does need some space to hold downloaded source code and install binaries, and for that it does still use $GOPATH, which as of Go 1.9 defaults to $HOME/go. So developers will never need to set $GOPATH unless they want these files to be in a different directory. To change just the binary install location, they can set $GOBIN (as always).
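As a sketch of the defaults and the optional overrides (the override paths here are illustrative):

```
# Since Go 1.9, $GOPATH defaults to $HOME/go, so with no configuration:
#   downloaded module source is cached under $HOME/go
#   installed binaries land in $HOME/go/bin
export GOPATH=$HOME/gocode   # optional: relocate downloads and binaries
export GOBIN=$HOME/bin       # optional: relocate installed binaries only
```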

Why are you introducing the // import comment?

We're not. That was a pre-existing convention. The point of that example in the tour was to show how go.mod can deduce the right module paths from import comments, if they exist. Once all projects use go.mod files, import comments will be completely redundant and probably deprecated.
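For reference, an import comment looks like this (hypothetical path); it pins the canonical import path of a package, which go.mod deduction can reuse:

```go
// An import comment pins a package's canonical import path.
package lib // import "example.com/lib"
```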

Discussion Summary (last updated 2018-04-25)

This issue comment holds a summary of the discussion below.

How can we handle migration?

[https://github.com/golang/go/issues/24301#issuecomment-374739116 by @ChrisHines.]

Response https://github.com/golang/go/issues/24301#issuecomment-377529520 by @rsc. The original proposal assumes the migration is handled by authors moving to subdirectories when compatibility is important to them, but of course that assumption is wrong: compatibility is most important to users, who have little influence over authors moving. Nor does it help older versions of the go command. The linked comment, now also #25069, proposes a minimal change so that the old "go build" can consume and build module-aware code.

How can we deal with singleton registrations?

[https://github.com/golang/go/issues/24301#issuecomment-374791885 by @jimmyfrasche.]

Response https://github.com/golang/go/issues/24301#issuecomment-377527249 by @rsc. Singleton registration collisions (such as http.Handle of the same path) between completely different modules are unaffected by the proposal. For collisions between different major versions of a single module, authors can write the different major versions to coordinate with each other, usually by making v1 call into v2, and then use a requirement cycle to make sure v2 is not used with older versions of v1 that don't know about the coordination.
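A sketch of that coordination pattern, assuming a hypothetical module example.com/reg whose v1 forwards registrations into v2 so both major versions share one registry:

```go
// Package reg sketches the v1 API of the hypothetical module example.com/reg.
package reg

import regv2 "example.com/reg/v2"

// Register forwards into v2's registry, so programs that mix both major
// versions see a single set of registrations. A go.mod requirement cycle
// (this v1 requires v2, and v2 requires this coordinating v1) ensures v2
// is never built alongside an older v1 that lacks this forwarding.
func Register(name string, f func()) {
	regv2.Register(name, f)
}
```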

How should we install a versioned command?

[https://github.com/golang/go/issues/24301#issuecomment-375106068 by @leonklingele.]

Response https://github.com/golang/go/issues/24301#issuecomment-377417565 by @rsc. In short, use go get. We still use $GOPATH/bin for the install location. Remember that $GOPATH now defaults to $HOME/go, so commands will end up in $HOME/go/bin, and $GOBIN can override that.
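For example (hypothetical command path):

```
go get example.com/cmd/mytool   # fetch, build, and install the command
ls $HOME/go/bin/mytool          # default install location; $GOBIN overrides
```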

Why are v0, v1 omitted in the import paths? Why must the others appear? Why must v0, v1 never appear?

[https://github.com/golang/go/issues/24301#issuecomment-374818326 by @justinian.]
[https://github.com/golang/go/issues/24301#issuecomment-374831822 by @jayschwa.]
[https://github.com/golang/go/issues/24301#issuecomment-375437150 by @mrkanister.]
[https://github.com/golang/go/issues/24301#issuecomment-376093912 by @mrkanister.]
[https://github.com/golang/go/issues/24301#issuecomment-376135447 by @kaikuehne.]
[https://github.com/golang/go/issues/24301#issuecomment-376141888 by @kaikuehne.]
[https://github.com/golang/go/issues/24301#issuecomment-376213693 by @Merovius.]
[https://github.com/golang/go/issues/24301#issuecomment-376247926 by @kaikuehne.]

Added to FAQ above.

Why are zip files mentioned in the proposal?

[https://github.com/golang/go/issues/24301#issuecomment-374839409 by @nightlyone.]

The ecosystem will benefit from defining a concrete interchange format. That will enable proxies and other tooling. At the same time, we're abandoning direct use of version control (see the rationale at the top of this post). Both of these motivate describing the specific format. Most developers will not need to think about zip files at all; no developer will need to look inside them unless they're building something like godoc.org.

See also #24057 about zip vs tar.

Doesn't putting major versions in import paths violate DRY?

[https://github.com/golang/go/issues/24301#issuecomment-374831822 by @jayschwa.]

No, because an import's semantics should be understandable without reference to the go.mod file. The go.mod file is only specifying finer detail. See the second half of the semantic import versions section of the proposal, starting at the block quote.

Also, if you DRY too much you end up with fragile systems. Redundancy can be a good thing. So "violat[ing] DRY" - that is to say, a limited amount of repeating yourself - is not always bad. For example, we put the package clause in every .go file in the directory, not just one. That caught honest mistakes early on and later turned into an easy way to distinguish external test packages (package x vs package x_test). There's a balance to be struck.

Which timezone is used for the timestamp in pseudo-versions?

[https://github.com/golang/go/issues/24301#issuecomment-374882685 by @tpng.]

UTC. Note also that you never have to type a pseudo-version yourself. You can type a git commit hash (or hash prefix) and vgo will compute and substitute the appropriate pseudo-version.
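A sketch of how this plays out in a go.mod (hypothetical module path and commit):

```
// Typing just a commit hash (or prefix) is allowed; vgo substitutes the
// corresponding pseudo-version, with the timestamp rendered in UTC:
require example.com/lib abcdef123456
// ...becomes:
require example.com/lib v0.0.0-20180307123456-abcdef123456
```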

Will vgo address non-Go dependencies, like C or protocol buffers? Generated code?

[https://github.com/golang/go/issues/24301#issuecomment-374907338 by @AlexRouSg.]
[https://github.com/golang/go/issues/24301#issuecomment-376606788 by @stevvooe.]
[https://github.com/golang/go/issues/24301#issuecomment-377186949 by @nim-nim.]

Non-Go development continues to be a non-goal of the go command, so there won't be support for managing C libraries and such, nor will there be explicit support for protocol buffers.

That said, we certainly do understand that using protocol buffers with Go is too difficult, and we'd like to see that addressed separately.

As for generated code more generally, a real cross-language build system is the answer, specifically because we don't want every user to need to have the right generators installed. Better for the author to run the generators and check in the result.

Won't minimal version selection keep developers from getting important updates?

[https://github.com/golang/go/issues/24301#issuecomment-375090551 by @TocarIP.]
[https://github.com/golang/go/issues/24301#issuecomment-375985244 by @nim-nim.]
[https://github.com/golang/go/issues/24301#issuecomment-375992900 by @Merovius.]

Added to FAQ.

Can I use master to develop v1 and then reuse it to develop v2?

[https://github.com/golang/go/issues/24301#issuecomment-375248753 by @mrkanister.]
[https://github.com/golang/go/issues/24301#issuecomment-375989173 by @aarondl.]

Yes. Added to FAQ.

What is the timeline for this?

[https://github.com/golang/go/issues/24301#issuecomment-375415904 by @flibustenet.]

Response in https://github.com/golang/go/issues/24301#issuecomment-377413777 by @rsc. In short, the goal is to land a "technology preview" in Go 1.11; work may continue a few weeks into the freeze but not further. Probably don't send PRs adding go.mod to every library you can find until the proposal is marked accepted and the development copy of cmd/go has been updated.

How can I make a backwards-incompatible security change?

[https://github.com/golang/go/issues/24301#issuecomment-376236546 by @buro9.]

Response in https://github.com/golang/go/issues/24301#issuecomment-377415652 by @rsc. In short, the Go 1 compatibility guidelines do allow breaking changes for security reasons to avoid bumping the major version, but it's always best to do so in a way that keeps existing code working as much as possible. For example, don't remove a function. Instead, make the function panic or log.Fatal only if called improperly.
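A sketch of that pattern (hypothetical package, type, and function names):

```go
package parser

// Result stands in for the package's real return type.
type Result struct{}

// UnsafeParse was found to be fundamentally insecure. Removing it outright
// would be a breaking API change, so instead it fails loudly when called,
// keeping existing code compiling while preventing insecure use.
func UnsafeParse(input string) (Result, error) {
	panic("parser.UnsafeParse is insecure and has been disabled; migrate to a safe alternative")
}
```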

If one repo holds different modules in subdirectories (say, v2, v3, v4), can vgo mix and match from different commits?

[https://github.com/golang/go/issues/24301#issuecomment-376266648 by @jimmyfrasche.]
[https://github.com/golang/go/issues/24301#issuecomment-376270750 by @AlexRouSg.]

Yes. It treats each version tag as corresponding only to one subtree of the overall repository, and it can use a different tag (and therefore different commit) for each decision.

What if projects misuse semver? Should we allow minor versions in import paths?

[https://github.com/golang/go/issues/24301#issuecomment-376640804 by @pbx0.]
[https://github.com/golang/go/issues/24301#issuecomment-376645212 by @powerman.]
[https://github.com/golang/go/issues/24301#issuecomment-376650153 by @pbx0.]
[https://github.com/golang/go/issues/24301#issuecomment-376660236 by @powerman.]

As @powerman notes, we definitely need to provide an API consistency checker so that projects at least can be told when they are about to release an obviously breaking change.

Can you determine if you have more than one package in a build?

[https://github.com/golang/go/issues/24301#issuecomment-376640804 by @pbx0.]

The easiest thing to do would be to use goversion -m on the resulting binary. We should make a go option to show the same thing without building the binary.

Concerns about vgo reliance on proxy vs vendor, especially open source vs enterprise.

[https://github.com/golang/go/issues/24301#issuecomment-376925845 by @joeshaw.]
[https://github.com/golang/go/issues/24301#issuecomment-376936614 by @kardianos.]
[https://github.com/golang/go/issues/24301#issuecomment-376947621 by @Merovius.]
[https://github.com/golang/go/issues/24301#issuecomment-376979054 by @joeshaw.]
[https://github.com/golang/go/issues/24301#issuecomment-376988873 by @jamiethermo.]
[https://github.com/golang/go/issues/24301#issuecomment-377134575 by @Merovius.]

Response: [https://github.com/golang/go/issues/24301#issuecomment-377411175 by @rsc.] Proxy and vendor will both be supported. Proxy is very important to enterprise, and vendor is very important to open source. We also want to build a reliable mirror network, but only once vgo becomes go.

Concerns about protobuild depending on GOPATH semantics.

[https://github.com/golang/go/issues/24301#issuecomment-377601170 by @stevvooe.]

Response [https://github.com/golang/go/issues/24301#issuecomment-377602765 by @rsc] asked for more details in a new issue, but that issue does not seem to have been filed.

Suggestion to add special vgo-v1-lock tag.

[https://github.com/golang/go/issues/24301#issuecomment-377662150 by @kybin.]

It seems appealing at first but leads to special cases that are probably not worth taking on. Full response in https://github.com/golang/go/issues/24301#issuecomment-384344659.

How does one patch a deep dependency without vendoring?

[https://github.com/golang/go/issues/24301#issuecomment-378255833 by @chirino.]

Response [https://github.com/golang/go/issues/24301#issuecomment-378261916 by @kardianos.] By using a replace directive.
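A minimal go.mod sketch (hypothetical paths), assuming the patched copy sits in a sibling directory:

```
module example.com/app

require example.com/deep/dep v1.2.3

// Point the deep dependency at a local, patched copy instead of vendoring:
replace example.com/deep/dep => ../dep-patched
```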

What will we do about module names changing?

[https://github.com/golang/go/issues/24301#issuecomment-379020339 by @jimmyfrasche.]

Response [https://github.com/golang/go/issues/24301#issuecomment-379307176 by @rsc.]
These are real, pre-existing problems that the vgo proposal does not attempt to address directly, but clearly we should address them eventually. The answer to code disappearing is to have caching proxies (mirrors) along with a reason to trust them; that's future work. (Or use vendoring in top-level project if you prefer.) The answer to code moving is to add an explicit concept of module or package redirects, much like type aliases are type redirects; that's also future work.

What about connecting to local Git servers?

[https://github.com/golang/go/issues/24301#issuecomment-383168012 by @korya.]

Direct git access has been added back to the plan. See #24915.

What about binary-only packages?

[https://github.com/golang/go/issues/24301#issuecomment-382793364 by @sdwarwick.]

Binary-only packages have only ever been supported in the limited circumstance of some kind of out-of-band installation into GOPATH/pkg. Go get has never supported fetching and installing a binary-only package, and it will continue not to support that. A binary-only package only works with one particular compiler and one particular copy of the dependencies, which severely limits how well it can be supported at all. The right answer is almost always to use source code instead.

Should we use path@version syntax in the go.mod file?

[https://github.com/golang/go/issues/24301#issuecomment-382791513 by @sdwarwick.]

This is #24119. It seemed like a good idea at first but, after careful consideration, no.


Change https://golang.org/cl/101678 mentions this issue: design: add 24301-versioned-go

This proposal is impressive and I like most everything about it. However, I posted the following concern on the mailing list, but never received any replies. In the meantime I've seen this issue raised by others in the Gophers slack channel for vgo and haven't seen a satisfactory answer there either.

From: https://groups.google.com/d/msg/golang-dev/Plc42fslQEk/rlfeNlazAgAJ

I am most worried about the migration path between a pre-vgo world and a vgo world going badly. I think we risk inflicting major pain on the Go community if there isn't a smooth migration path. Clearly the migration cannot be atomic across the whole community, but if I've understood all that you've written about vgo so far, there may be some situations where existing widely used packages will not be usable by both pre-vgo tools and post-vgo tools.

Specifically, I believe that existing packages that already have tagged releases with major versions >= 2 will not work with vgo until they have a go.mod file and also are imported with a /vN augmented import path. However, once those changes are made to the repository it will break pre-vgo uses of the package.

This seems to create a different kind of diamond import problem in which the two sibling packages in the middle of the diamond import a common v2+ package. I'm concerned that the sibling packages must adopt vgo import paths atomically to prevent the package at the top of the diamond from being in an unbuildable state whether it's using vgo or pre-vgo tools.

I haven't seen anything yet that explains the migration path in this scenario.

The proposal states:

Module-aware builds can import non-module-aware packages (those outside a tree with a go.mod file) provided they are tagged with a v0 or v1 semantic version. They can also refer to any specific commit using a “pseudo-version” of the form v0.0.0-yyyymmddhhmmss-commit. The pseudo-version form allows referring to untagged commits as well as commits that are tagged with semantic versions at v2 or above but that do not follow the semantic import versioning convention.

But I don't see a way for non-module-aware packages to import module-aware packages with transitive dependencies >= v2. That seems to cause ecosystem fragmentation in a way not yet addressed. Once you have a module-aware dependency that has a package >= v2 somewhere in its transitive dependencies that seems to force all its dependents to also adopt vgo to keep the build working.

Update: see also https://github.com/golang/go/issues/24454

The Go project has encouraged this convention from the start of the project, but this proposal gives it more teeth: upgrades by package users will succeed or fail only to the extent that package authors follow the import compatibility rule.

It is unclear to me what this means and how it changes the current situation. It would seem to me that this describes the current situation as well: if I break this rule, upgrades and go-get will fail. AIUI nothing really changes, and I'd suggest removing at least the mention of "more teeth" - unless, of course, this paragraph is meant to imply that there are additional mechanisms in place to penalize/prevent breakages?

This would also affect things like database drivers and image formats that register themselves with another package during init, since multiple major versions of the same package can end up doing this. It's unclear to me what all the repercussions of that would be.

If the major version is v0 or v1, then the version number element must be omitted; otherwise it must be included.

Why is this? In the linked post, I only see the rationale that this is what developers currently do to create alternate paths when they make breaking changes - but that is a workaround for tooling that doesn't handle versions for them. If we're switching to a new practice, why not allow and encourage (or even mandate) that new vgo-enabled packages include v0 or v1? It seems like paths lacking versions are just opportunities for confusion. (Is this a vgo-style package? Where is the module boundary? etc.)

I generally like the proposal, but am hung up on requiring major versions in import paths:

  1. It violates the DRY principle when the major version is already known from the go.mod. What happens when there's a mismatch between the two is also hard to intuit.
  2. The irregularity of allowing v0 and v1 to be absent is also unintuitive.
  3. Changing all the import paths when upgrading a dependency seems potentially tedious.

I understand that scenarios like the moauth example need to be workable, but hopefully not at the expense of keeping things simple for more common scenarios.

First of all: Impressive work!

One thing that is totally unclear to me and seems a bit underspecified:

Why are zip files part of this proposal?

The layout, the constraints, and several use cases - when the zip file is created, how its life cycle is managed, which tools need to support it, and how tools like linters should interact with it - are also unclear, because they are not covered in the proposal.

So I would suggest either referring to a later, still unwritten, proposal here and removing the word zip, or removing that whole part from the proposal text if you do not plan to discuss it within the scope of this proposal.

Discussing this later would also enable a different audience to contribute better here.

Which timezone is used for the timestamp in the pseudo-version (v0.0.0-yyyymmddhhmmss-commit)?

Edit:
It is in UTC as stated in https://research.swtch.com/vgo-module.

@rsc Will you be addressing C dependencies?

Looks like minimal version selection makes propagation of non-breaking changes very slow. Suppose we have a popular library Foo, which is used by projects A, B, and C. Someone improves Foo's performance without changing its API. Currently, receiving updates is an opt-out process: if project A vendored Foo but B and C didn't, the author only needs to send A a PR updating the vendored dependency. So under this proposal, non-API-breaking contributions won't have as much effect on the community and are somewhat discouraged compared to the current situation. This is even more problematic for security updates. If some abandoned/small/not very active project (not library) declares a direct dependency on an old version of, e.g., x/crypto, all users of that project will be vulnerable to a flaw in x/crypto until the project is updated, potentially forever. Currently, users of such projects receive the latest fixed version, so this makes the security situation worse. IIRC there were some suggestions on how to fix this in the mailing list discussion but, as far as I can tell, this proposal doesn't mention them.

IIRC there were some suggestions how to fix [getting security patches] in maillist discussion, but, as far as I can tell this proposal doesn't mention it.

See the mention of go get -p.

See the mention of go get -p.

I've seen it, but this is still an opt-in mechanism.
I was thinking of a way for a library to mark all previous releases as unsafe, to force users to run go get -p or explicitly opt in to the insecure library.

If support for go get as we know it today is deprecated and eventually removed, what's the recommended way to fetch & install (untagged) Go binaries then? Does it require git clone'ing the project first, followed by a manual go install to install the binary?
If $GOPATH is deprecated, where will these binaries be installed to?

@leonklingele: from my understanding, go get will not be deprecated - on the contrary.
It will be enhanced with automatic and transparent versioning capabilities. If a project depends on an untagged project, it will just take master and "vendor" it at this exact version.
Again, this is my own understanding from reading just a little bit about vgo. I'm still in the process of understanding it completely.

I wonder how this will affect the flow of working with a Git repository in general, also building on this sentence from the proposal:

If the major version is v0 or v1, then the version number element must be omitted; otherwise it must be included.

At the moment, it seems common to work on master (for me this includes short-lived feature branches) and to tag a commit with a new version every now and then. I feel this workflow is made more confusing with Go modules as soon as I release v2 of my library, because now I have a master and a v2 branch. I would expect master to be the current branch and v2 to be a maintenance branch, but it is exactly the other way around.

I know that the default branch can be changed from master to v2, but this still leaves me with the task to update that every time I release a new major version. Personally, I would rather have a master and a v1 branch, but I am not sure how exactly this would fit the proposal.

New major releases cause churn. If you have to change one setting in your Git repository (the default branch) whenever you make a new release, that’s a very minor cost compared to your library’s users switching to the new version.

I think this aspect of the proposal sets the right incentive: it encourages upstream authors to think about how they can do changes in a backwards-compatible way, reducing overall ecosystem churn.

now I have a master and a v2 branch

You can instead create a v2/ subdirectory in master.

@mrkanister

I would rather have a master and a v1 branch, but I am not sure how exactly this would fit the proposal.

According to my understanding of https://research.swtch.com/vgo-module vgo uses tags not branches to identify the versions. So you can keep development on master and branch off v1 as long as the tags point to the correct branch and commit.

New major releases cause churn. If you have to change one setting in your Git repository (the default branch) whenever you make a new release, that’s a very minor cost compared to your library’s users switching to the new version.

This is a problematic style of thinking that I think has bitten Go hard in the past. For one person on one project, switching what branch is default is simple in the moment, yes. But going against workflow conventions will mean people forget, especially when they work in several languages. And it will be one more quirky example of how Go does things totally differently that newcomers have to learn. Going against common programmer workflow conventions is _not at all_ a minor cost.

Going against common programmer workflow conventions is not at all a minor cost.

Not following the conventional path is sometimes the necessary condition for innovation.

If I understood parts of the proposal correctly, you never have to create a subdirectory or a new branch. You can have only a master branch and tag your repo from 0.0, to 1.0, to 2.0, and so on, as long as you make sure to update your go.mod to the correct import path for your library.
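A sketch of the go.mod change across such a tag history (hypothetical path):

```
// go.mod while tagging v0.x.y and v1.x.y on master:
module example.com/lib

// go.mod once you begin tagging v2.x.y on the same branch:
module example.com/lib/v2
```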

@mrkanister I think, for dev, you clone your master (or any dev branch) and use the "replace" directive (see vgo-tour) to point to it (if I understand what you mean; not sure).

@rsc I'd like to ask you to be more precise about the road map and what we should do now.
Will it follow the Go policy and feature-freeze vgo in 3 months (2 now)?
Should we now go with our pilgrim's staff and ask every library maintainer to add a go.mod file, or should we wait for the proposal to be officially accepted (to be sure that the name and syntax will not change)?

@flibustenet Tools are not covered by the 1.0 policy so anything can change.

https://golang.org/doc/go1compat

Finally, the Go toolchain (compilers, linkers, build tools, and so on) is under active development and may change behavior. This means, for instance, that scripts that depend on the location and properties of the tools may be broken by a point release.

Also from the proposal

The plan, subject to proposal approval, is to release module support in Go 1.11 as an optional feature that may still change. The Go 1.11 release will give users a chance to use modules “for real” and provide critical feedback. Even though the details may change, future releases will be able to consume Go 1.11-compatible source trees. For example, Go 1.12 will understand how to consume the Go 1.11 go.mod file syntax, even if by then the file syntax or even the file name has changed. In a later release (say, Go 1.12), we will declare the module support completed. In a later release (say, Go 1.13), we will end support for go get of non-modules. Support for working in GOPATH will continue indefinitely.

Thanks for the feedback.

@AlexRouSg

According to my understanding of https://research.swtch.com/vgo-module vgo uses tags not branches to identify the versions. So you can keep development on master and branch off v1 as long as the tags point to the correct branch and commit.

You are correct, this will continue to work as before (just double checked to be sure), good catch!

With that out of the way, the thing that I (and apparently others) don't understand is the reasoning behind disallowing a v1 package to exist. I tried to import one using /v1 at the end of the import and also adding that to the go.mod of the package being imported, but vgo will look for a folder named v1 instead.

@mrkanister
I think the main reason for not allowing v1 or v0 in the import path is to ensure that there is only one import path for each compatible version of a package.
Using the plain import path instead of /v1 is to ease the transition, so you don't have to update all your import paths to add /v1 at the end.

Hi,

While a lot of the points in the proposal are more than welcome and will help tame the large Go codebases that have emerged over time, the "use minimal version" rule is quite harmful:

  • you want your code ecosystem to progress. That means you want people testing and using new versions, detecting problems early before they accumulate.
  • you want new module releases that fix security problems to be applied as soon as possible. Such releases are not always labeled as security fixes, so if you avoid new releases you also avoid those fixes.
  • even when a new release does not contain security fixes, applying its changes early means there will be fewer changes to vet when the next release that does contain security fixes is published (and the last thing you want, when such a release is published and you need to be quick, is to be bogged down in intermediary changes you didn't look at before).
  • applying intermediary releases is only harmful if they break compat, and they shouldn't break compat; and if they do break compat, it's better to detect it and tell the module authors before they make it a habit for the next releases you'll eventually absolutely need.
  • you do not want old bits of code to drag you down because they still specify an ancient dependency version and no one finds the time to update their manifest. Using the latest version of a major release serves this social need in other code ecosystems: it forces devs to test the latest version and not postpone until it's too late because “there are more important” (i.e. more fun) things to do.

    • while in theory you can ship a limitless number of module versions so every piece of code can use the one it wants, in practice, as soon as you compose two modules that use the same dep, you have to choose a version; so the more complex your software is, the less you'll tolerate multiple versions. You soon hit the old problem of what to do with stragglers that slow down the whole convoy. I never met a human culture that managed this problem by telling stragglers "you're right, go as slow as you want, everyone will wait for you". It might be nice and altruistic, but it's not productive.

Fighting human inertia is hard and painful, and we're fighting it because it is required to progress, not because it is pleasant. Making pleasant tools that avoid the problem and incite humans to procrastinate some more is not helpful at all; it will only accelerate project sedimentation and technical-debt accumulation. There are already dozens of Go projects on GitHub with most of their README devoted to the author begging users to upgrade because of important fixes; defaulting to the oldest release will generalize the problem.

A good rule would be "use the latest release that matches the major release, not every intermediary commit". That would be a compromise between moving forward and stability. It puts the original project in command, which knows the codebase best and can decide sanely when to switch its users to a new code state.

My unanswered question copied from mailing list:

We expect that most developers will prefer to follow the usual “major branch” convention, in which different major versions live in different branches. In this case, the root directory in a v2 branch would have a go.mod indicating v2, like this:

It seems like both subdirectories and this major-branch convention are supported by vgo. In my anecdotal experience no repositories follow this convention in Go or other languages (I can't actually think of a single one, other than those forced to by gopkg.in, which seems relatively unused these days). The master branch is whatever latest is, and has v2.3.4 tags in its history. Tags exist to separate everything (not just minor versions). If it's necessary to patch an old version, a branch is temporarily created off the last v1 tag, commits are pushed, a new tag is pushed, and the branch is summarily deleted. There is no branch for versions; it's just current master/dev/feature branches + version tags. I know that "everything is a ref" in Git, but for other VCSes the distinction may not be as fuzzy.

Having said that, I've tested the above-described workflow with vgo (just having tags that say v2.0.0, v2.0.1, and no branches) and it does seem to work. So my question is: although this works now, is it intended? It isn't as thoroughly described as the other two workflows in the blog, and I want to ensure that working without a v2/v3... branch is not accidental functionality that will disappear, since, as I explained above, I've never seen the described workflows in the post be massively adopted by anyone (especially outside the Go community).

Of course my argument is coming down to preference and anecdotes, so I'd be willing to do some repo-scraping to prove this across all languages if needed. So far I've really liked the proposal posts and am generally on board with the changes, will continue to follow along and play with vgo.

Thanks for all your efforts.

Can someone maybe clarify how the proposed alternative model to MVS would work to improve upgrade-cadence? Because it isn't clear to me. My understanding of the alternative (widely used) model is

  • Developer creates handcrafted manifest, listing version constraints for all used dependencies
  • Developer runs $solver, that creates a lockfile, listing some chosen subset of transitive dependency versions that satisfy the specified constraints
  • This lockfile gets committed and is used at build and install time to guarantee reproducible builds
  • When a new version of a dependency is released and to be used, developer potentially updates the manifest, reruns the solver and recommits the new lockfile

The proposed MVS model as I understand it is

  • Developer autogenerates go.mod, based on the set of import paths in the module, selecting the currently newest version of any transitive dependency
  • go.mod gets committed and is used to get lower bounds on versions at build and install time. MVS guarantees reproducible builds
  • When a new version of a dependency is released and to be used, developer runs vgo get -u, which fetches the newest versions of transitive dependencies and overwrites go.mod with the new lower bounds. That then gets submitted.

It seems I must be grossly overlooking something, and it would be helpful if someone would point out what. Because lockfiles specify exact versions that are then used in the actual build, this understanding seems to imply that MVS is the better of the two at increasing upgrade-cadence, as it doesn't allow holding back versions in general.

Clearly I'm missing something (and will feel stupid in about 5m), what is that?
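For reference, a sketch of the MVS workflow described above, as the vgo prototype currently behaves:

```
vgo build     # resolve imports; record any missing lower bounds in go.mod
vgo get -u    # upgrade: fetch newest versions and raise the go.mod bounds
```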

@tpng

Using the plain import path instead of /v1 is to ease the transition, so you don't have to update all your import paths to add /v1 at the end.

This should actually not be necessary. Let me give an example:

A user is currently using e.g. v1.0.0 of a library, pinned by a dependency manager and the tag in the upstream repository. Now upstream decides to create a go.mod and also calls the module /v1. This should result in a new commit and a new tag (e.g. v1.0.1). Since vgo will never attempt to update dependencies on its own, this should not break anything for the user, but he/she can update consciously by also changing the import path (or vgo can do that for him/her).

I think the main reason for not allowing v1 or v0 in the import path is to ensure that there is only one import path for each compatible version of a package.

Yes, I guess I can indeed see that point to not confuse new users of a library.

If the major version is v0 or v1, then the version number element must be omitted; otherwise it must be included.

Can someone please explain the reasoning behind this? What do I do, as a user of a library, when I don't want to use v1 yet, because it introduced a breaking change, which would be totally fine with semantic versioning (new major version)?

I'd prefer to only omit the version prior to version 1, which indicates an unstable/unfinished package. Starting from version 1, I want to be able to rely on the fact that I get a stable release. Omitting the v1 in the import path is confusing, because you don't know if you're tracking an ever-changing library or a stable version. I think this also doesn't work well with the semantic versioning scheme, where v1 pinpoints the first stable release and is used to clearly distinguish that version from versions 0.x.

@kaikuehne

What do I do, as a user of a library, when I don't want to use v1 yet, because it introduced a breaking change, which would be totally fine with semantic versioning (new major version)?

As far as I understood, vgo will never update dependencies on its own, not even to a patch version, but instead leave this as a conscious decision for you to make. So if you depend on v0.4.5 of a library (which has a tag for it), you can theoretically keep using that forever. You will also be able to pin the version manually in your go.mod file.

What if another dependency I use depends on the same package, but version v1? Both packages are imported using the same import path. Isn't there a conflict when compiling those into the same binary?

If we'd require v1 to also be part of the import path, both would be treated as different packages, which they kind of are.

@kaikuehne then it would update to the minimal common version that works. (to my understanding)

@kaikuehne I don't understand your reasoning. You are using v0, so presumably you are fine with breaking changes; why would it be a problem if v1 breaks, given that you are already using a version that has no stability guarantee? Furthermore, say instead of going from v0.1->v1.0 with a breaking change, upstream would add the break to v0.2 and then release a (non-breaking) v1.0. That would seem to be well within the expectations around semantic versions, but would amount to exactly the same amount of toil for you (no matter the package manager used). I.e. I really don't understand how "the author suddenly stopped breaking their API" constitutes a problem that is not caused by usage of a pre-1.0 release.

To put it in other words: By using v0.x, you are accepting that v0.(x+1) might force you to fix your code. Why is it a problem if v0.(x+1) is called v1.0 instead?

On discussion point 4: Explicitly adopting the import compatibility rule and "the new package must be backwards compatible with the old package"...

In my case I have a security package https://github.com/microcosm-cc/bluemonday and I recently (late last year) had a scenario in which I learned that a public func was fundamentally not fit for purpose. So I removed it.

Under this proposal, if I removed the func it would mean bumping the major version, and the insecure/unsafe code would never be removed.

To avoid that I would probably instead log.Fatal() within the func I wished to remove, purely to ensure that existing code did not use an insecure endpoint while preserving compatibility.

Given that neither of these are ideal... how do we foresee security fixes that require a public func to be hard deprecated being handled? (if not by surprising the developer with a runtime log.Fatal()?)

For those wanting to see the commit for this: https://github.com/microcosm-cc/bluemonday/commit/a5d7ef6b249a7c01e66856b585a359970f03502c

@Merovius Thanks for the clarification on using versions 0.x. As you said, there are no guarantees to rely on when using a version prior to 1.0, and versions 0.x are a special case in semver. If I understood correctly, the outlined rules actually do not apply to versions 0.x at all. My question is just whether it would make sense to reflect this distinction in the code as well -- especially when naming packages. For example, if a package only imported packages without a version, you could see at a glance that it is built upon unstable code. If all imports contain a version v1, you see that it uses stable versions.

@buro9 The proposal suggests roughly following the go1 compatibility guarantees, which contain exemptions for security-related API breakages.

@kaikuehne Thanks, that clarifies your concern.

There's an interaction of features that I'm concerned about (assuming I understand things correctly).

If you have a module M that uses reified versions (literal vN directories in the source, not synthetic import path elements derived from tags), and you're building a program P that relies on multiple major versions of M transitively, won't that have to violate minimal version selection in some scenarios?

That is, say P depends on major versions 2, 3, and 4 of M. For each major version of M, there's a minimal complete version specified. Since the versions of M share source for the express purpose of being able to do things like transparently use the same type definition with type aliases, only one copy of M can be included for all three major versions, instead of one copy per major version. Any choice of complete version for one major version fixes the choice of complete version for the other two, and could lead to selecting a non-minimal version for one or both of the others.

How does vgo handle this? Could this cause any problems other than just sometimes being slightly less than minimal? (Like it being possible to accidentally construct a set of modules that yield no solution or cause the solver to loop?)

@jimmyfrasche

If you are using major version directories, vgo will still use tags and then take only the matching version folder from that tag. E.g. if you're depending on versions 2, 3, and 4, vgo will check out tags v2.n.m, v3.n.m, and v4.n.m, and then from tag v2.n.m take only the v2 folder, and so on. So in the end everything still follows tags.

I asked a question in this mailing list post, but haven't seen a concrete answer. I'll repeat it here:

what will happen with non-Go resources, such as protobufs or C files? Apologies if this was already answered in your posts, but we do use the vendor path to distribute and set the import path for protobuf files. While we compile Go packages against pre-compiled protobuf output, we also have to consider the case of compiling new Go packages from protobuf files that depend on vendored dependencies (i.e. referencing rpc.Status from another protobuf file). I can provide some more concrete examples if that description is too dense.

Specifically, I have a project called protobuild that allows one to map protobuf file imports to GOPATH locations. I am not seeing how this proposal will handle resolution of resources in the GOPATH and how we can map that into other compilers' import spaces.

This was a huge pain point for working with protobufs for us and this project alleviated a lot of those problems. It would be a shame if this was completely incompatible with the proposal.

Again, apologies if there is something I've missed.

I love the proposal. I only worry that too many projects frequently introduce small breaking changes between minor versions and reserve major version bumps only for very large breaking changes. If breaking changes between minor versions are frequent, diamond dependency issues will arise, since that violates the assumption of semantic import versioning. I wonder if including the minor version in the import statement would help more projects comply with the non-breaking-changes contract. I think the downsides are extra import churn, and updating minor versions gets more difficult (and more explicit).

This brings up another question I have: is it easy (today) with vgo to determine if you have more than one version of a package in a build? While allowing two versions of the same package is crucial to moving forward sometimes, it seems like most projects would want to avoid it unless temporary, due to possible unforeseen side effects of init. Having an easy way to check this might be useful, and some projects may want to enforce it during code check-in.

I only worry that too many projects frequently introduce small breaking changes between minor versions and reserve major version bumps only for very large breaking changes.

A breaking change is a breaking change. It can't be small or large. I suppose such packages will eventually be rejected by the community as too unreliable and not following semver. Even if they follow semver and quickly get to a large major number like 42.x.x, they will anyway be very inconvenient to use. So, if you want to make a lot of breaking changes, just keep using major number 0. This way it'll continue to work just like current go get, with the same issues. If you want to experiment with the API after releasing a non-0 major version, move these experiments to a separate "experimental" package with a forever-0 major, and increment the main package's major when you're done with "small breaking changes" and finally have the next stable API.

Including the minor version in the import path violates semver ideology and doesn't provide any new features over including the major.

While I understand that that is what authors should do given the semantics of semver, my fear is that small breaking changes among minor versions are a common and tempting case for someone already past version 1. In my anecdotal experience, package authors don't follow the exact semantics of semver, and companies often reserve major version bumps for marketing purposes. So my appeal is not about what is ideal, but about whether it may be practical to make certain changes to better deal with the messy human reality that is semver.

Maybe it's even possible to optionally include the minor version for packages not obeying semver; maybe that screws up the entire proposal, I'd need to think it through more. I'm not proposing this is a better way, only that I am interested in further exploring the trade-offs of either optionally including or mandating the inclusion of the minor version in imports.

It's likely possible to generate better data here from the Go corpus (and a tool that looks for obvious breaking changes) to determine how often semver is, say, mostly followed among existing popular projects that use semver tags.

The vgo proposal allows major versions to make breaking API changes gracefully. It (by itself) does nothing to prevent them.

The vgo proposal allows one major version to refer to another major version, allowing graceful code re-use. It does nothing to force that.

The vgo MVS proposal allows updating to newer packages. It does not force you to update.


Here are things we can build with much greater ease in a vgo world:

  1. Tooling to catch API breaking changes before pushing to a module host. Each version is in a zip and can be compared without many different VCS tools and commands.
  2. A security issue registry, hosted alongside module hosting for added value. Tools can query it, similar to go list, either manually or automatically, and get notified of issues filtered by a query.

vgo makes these easier by:

  • Constraining the problem-space (work only with zip files, no need for storing arbitrary git/hg/svn history to do API comparisons).

  • Defining the problem-space (MVS, version definition).

  • Tooling the build portion.

vgo operates on one main principle: a single set of inputs should always build the same thing. If we are relying on random rebuilds to catch security updates, we are doing it wrong. As a maintainer of projects, I see some Go projects just run for years without a rebuild. I agree that timely security updates are important: vgo satisfies the build part of this need.

We shouldn't confuse what vgo allows with what vgo is.

@pbx0 I'm not sure this is the right place for such a discussion; maybe the mailing list is more suitable. In any case, breaking changes without changing the major number happen, even if only occasionally. Semver has this case answered in its FAQ:

As soon as you realize that you’ve broken the Semantic Versioning spec, fix the problem and release a new minor version that corrects the problem and restores backwards compatibility.

But I think there is room for improvement here. I suppose it should be easy enough to implement an automated API checker (maybe one already exists) that takes all packages listed on godoc.org, periodically checks for new versions, and, when a new version is detected, checks whether it is compatible with the previous version's API (in cases where the package uses semver tags and the major number wasn't changed). It could then try to contact the author when an incompatible change is detected - maybe auto-opening issues on GitHub or using the email from the GitHub account. This won't cover all possible cases, but it may be very helpful for the community.

I firmly believe in using paths to scope major versions and wanted to voice my support, FWIW.

A year ago, I believed that putting versions in import paths like this was ugly, undesirable, and probably avoidable. But over the past year, I've come to understand just how much clarity and simplicity they bring to the system.

Nearly twenty years ago I needed a VCS for my startup and tried about ten different ones. I immediately ruled out Perforce because it exposed branches using directories. So the v2 branch would just be all the same code, but under a v2 folder. I hated this idea. But after running pilots with the other nine, I found that I often wanted to have both versions of a branch on my local file system, and it turned out that Perforce made that trivially easy whereas the others made it surprisingly complicated. The file system is our containment metaphor. Let's use it.

From the proposal:

Define a URL schema for fetching Go modules from proxies, used both for installing modules using custom domain names and also when the $GOPROXY environment variable is set. The latter allows companies and individuals to send all module download requests through a proxy for security, availability, or other reasons.

I've mentioned this elsewhere, but my biggest concern with the proposal is that it seems to discard (or at least doesn't address) one of the greatest strengths of vendoring today: always-available code and perfectly reproducible builds. I don't think the proposed proxy system adequately addresses or replaces these strengths.

A project that uses vendoring today requires only the Go toolchain and Git (or another version control system). There exists only one single point of failure: the git repository. Once the code is checked out, it can be built and rebuilt perfectly without needing to touch the network again -- an important security consideration. For most of us a vendoring project requires no additional infrastructure -- just your local computer and a free GitHub account.

vgo's reliance on proxies introduces a non-trivial infrastructure overhead that I believe isn't sufficiently emphasized. New infrastructure must be provisioned, implemented, maintained and monitored. This may be reasonable for software organizations, but is a burden for individuals and decentralized open source projects. Moreover, things like CI are often outsourced to third parties like Travis CI, which don't run in the same infrastructure as the company's development environment. This makes it harder to reuse proxy infrastructure.

Other languages allow the use of caching proxies. Python is the one I have the most experience with, and in my experience it is rarely used due to the overhead of setting one up. Maybe the Go project can make this simpler, but if it's not the default we will find it used less often and the availability of Go builds in the wild will decline substantially. The impact of the left-pad NPM event is a great case study in this respect.

If the module and proxy system allows us to check in dependencies alongside our own project's code (similar to vendoring today, but not necessarily the same implementation) and the vgo proxy implementation can use it, that would address my concerns. But if that's the intention, I think it needs to be addressed much more fully in the proposal.

@joeshaw You will continue to be able to use a vendor directory to create a self contained build.

From the proposal:

Disallow use of vendor directories, except in one limited use: a vendor directory at the top of the file tree of the top-level module being built is still applied to the build, to continue to allow self-contained application repositories. (Ignoring other vendor directories ensures that Go returns to builds in which each import path has the same meaning throughout the build and establishes that only one copy of a package with a given import path is used in a given build.)

A project that uses vendoring today requires only the Go toolchain and Git (or another version control system). There exists only one single point of failure: the git repository.

Today, if you are hosting packages under your own domain, you need to a) host the git repository and b) serve the <meta> tags necessary for go get to find it. In the future, you need only serve the .zip and .json files necessary for vgo to fetch the code. There seem to be fewer points of failure and less infrastructure needed for pure hosting. Of course you still need to host the repository for development, but not only does that bring us at worst to the same level as before, the repository itself also requires far less scalable and reliable hosting, as it's only used by people actually developing.
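
For concreteness, the <meta> tag mechanism mentioned above is just an HTML page served at the import path's URL; the domain and repository below are placeholders:

<!-- served at https://example.com/mymod?go-get=1 -->
<meta name="go-import" content="example.com/mymod git https://github.com/someuser/mymod">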

So, for vanity imports, there doesn't appear to be much of a difference in terms of overhead.

If, OTOH, you are not using vanity imports, you are disregarding all the infrastructure github is running for you. So this doesn't seem to be an apples-to-apples comparison: The only way you come out on top is because you let other people solve the hard problems. But there is nothing preventing you from doing that in the future. AIUI Microsoft and others have already volunteered to invest engineering hours and serving capacity into this task. I'd expect that in the future, the effort of hosting Go packages will be roughly lower bounded by the effort of hosting npm packages or gems or crates: You have a github repository and then click a "make this available" button on some centrally managed service to host the zips.

vgo's reliance on proxies

Personally, I dislike the word "proxy" for what the required infrastructure does. "Mirror" seems more appropriate. The mirror may be implemented as a proxy, but it can also be just a bunch of files served by a webserver of your choice, hidden behind cloudflare for approximately infinite scalability and uptime.

Other languages allow the use of caching proxies.

I'd argue other languages serve as a perfectly fine model for why this isn't really a problem. They tend to rely on centralized hosting for their packages - vgo not only supports this model exceedingly well, it also makes it optional (so you get all the advantages at none of the disadvantages) and very simple to implement, scale, and operate.


IMO, if you directly compare what is happening before and after, in Go and in other languages, it should be clear that there are a lot of 1:1 equivalencies. The only reason it feels like it would be more effort in the future is that we take the existing infrastructure for granted and see that the new infrastructure doesn't exist yet. But I don't think we have good reasons to doubt that it will.

So, for vanity imports, there doesn't appear to be much of a difference in terms of overhead.

True, but the _vast_ minority of packages use vanity domains, so I don't think this is a strong counterpoint. (I also have other practical issues with vanity domains, which is that their availability tends to be _much worse_ than just using GitHub.)

If, OTOH, you are not using vanity imports, you are disregarding all the infrastructure github is running for you.

Yes, exactly! I feel that this makes my point for me. You get all that wonderful work on infrastructure and maintaining uptime for free or at a reasonable cost. More importantly, it doesn't involve any of your own time.

If GitHub ended up running a vgo-compatible mirror, then maybe this is less of a concern, though I like the elegance of a single Git fetch -- an atomic action from the user's perspective -- containing all the code I need to build a project.

I'd argue other languages serve as a perfectly fine model for why this isn't really a problem. They tend to rely on centralized hosting for their packages - vgo not only supports this model exceedingly well, it also makes it optional (so you get all the advantages at none of the disadvantages) and very simple to implement, scale, and operate.

The problem is that this adds single points of failure (SPOFs) to building a project. The code is going to live in a VCS no matter what (and probably GitHub), so that's one SPOF. In other languages, a centralized repository is a second SPOF. For Go, every import path is an additional SPOF (github.com, golang.org, honnef.co, rsc.io, etc.) and increasing SPOFs lowers overall availability.

Running a mirror may reduce that back to two, sure, but it's infrastructure that I argue doesn't need to exist. Vendoring your deps (or having a local on-disk mirror) reduces this back to just one: your VCS.

In any case, this may be a moot point. I didn't originally understand the part of the proposal about the top-level vendor directory which would seem to solve my main concerns -- thanks for pointing that out, @kardianos -- though a clean break from the old vendoring system might be nicer.

I'm glad that "top-level vendor" will still be a supported configuration because I love git grep so much.

Running a mirror may reduce that back to two, sure, but it's infrastructure that I argue doesn't need to exist.

Enterprise developer here. Maybe we're not the target demographic, but we'd like to use Go. I can't see that happening without us having a mirror inside our enterprise, as we do for java and javascript, to ensure that anything we build today can be built tomorrow.

Enterprise developer here. Maybe we're not the target demographic, but we'd like to use Go. I can't see that happening without us having a mirror inside our enterprise, as we do for java and javascript, to ensure that anything we build today can be built tomorrow.

@jamiethermo Does vendoring not address that for you today?

Support for proxies or mirrors is fine, and if that helps Go adoption I'm all for it. My concern was mirrors as a replacement for local, self-contained source trees (what we call vendoring today).

@joeshaw

Does vendoring not address that for you today?

Well, I must confess ignorance, and a certain amount of frustration with "how should I organize my code". By "self-contained source tree", do you mean that I have a giant git repo that contains all my code, plus all the code in the vendor tree? So I don't need a mirror because I've checked all that other code into my own repo?

With regard to "how should I organize my code", this page, How to Write Go Code, instructs on directory structure, but makes no mention of vendoring.

Adopt semantic import versioning, in which each major version has a distinct import path. Specifically, an import path contains a module path, a version number, and the path to a specific package inside the module. If the major version is v0 or v1, then the version number element must be omitted; otherwise it must be included.

The packages imported as my/thing/sub/pkg, my/thing/v2/sub/pkg, and my/thing/v3/sub/pkg come from major versions v1, v2, and v3 of the module my/thing, but the build treats them simply as three different packages.

my/thing/sub/pkg can also be v0.
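
In code, the quoted rule means two major versions are simply two different imports. A sketch, using the hypothetical my/thing module from the proposal text:

package client

// To the build, these are just two unrelated packages that happen
// to share a prefix; both can appear in the same program.
import (
	_ "my/thing/sub/pkg"    // major version v0 or v1: no version element
	_ "my/thing/v2/sub/pkg" // major version v2: the /v2 element is required
)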

I followed the tour. The example is built outside the GOPATH tree, and it works, whereas dep init barfs in that situation. I work with, I think, 80 teams now, and it'd be really nice not to have to have one giant tree. (As I was typing this, someone walked up with their laptop and a frowny face, and "I've got this really weird bug...")

@joeshaw

You get all that wonderful work on infrastructure and maintaining uptime for free or at a reasonable cost.

I phrased that poorly. I meant to say "if you claim that you used to need less infrastructure if you are not using vanity imports, you are disregarding all the infrastructure github runs for you". I meant to point out that the infrastructure is still there and still needs to run. And it can continue to be run for you in the future just as well.

So your concern would be addressed in what I consider the most likely scenario: That a funded body (currently Microsoft is shaping up to be that) is doing the hosting of the zip-files.

though I like the elegance of a single Git fetch -- an atomic action from the user's perspective

vgo get is just as much an atomic action, and it does less work of the same nature (HTTP requests), in a way that is easier to understand.

The problem is that this adds single points of failure (SPOFs) to building a project.

Where is my last comment incorrect, then? Because it seems pretty obvious to me that the number of points of failures is, if anything, less, they are simpler, easier to scale and cheaper to run.

In other languages, a centralized repository is a second SPOF. For Go, every import path is an additional SPOF (github.com, golang.org, honnef.co, rsc.io, etc.) and increasing SPOFs lowers overall availability.

I think this is oversimplified. If you host your code on a mirror which goes down, then you can still develop (github is still up) and users may still be able to install (by using a different mirror). The host of one of your dependencies going down doesn't matter, as you mirror your dependencies. If you don't trust the "centralized" community-run mirror to stay up, you can run your own or pay someone to do it, with no difference to the users, giving you full control over how much you are affected by downtime of the centralized repo.

None of the things you mention actually is a SPOF, as you still get at least degraded operation. Add to that the fact that the actual mirrors are comparatively trivial to operate (as they are just stateless HTTP servers), and I'm unconvinced that reliability and availability would meaningfully decrease.

And FTR, right now your git-hosting is a SPOF: If your git-host is down, users can't install and you can't develop.

Running a mirror may reduce that back to two, sure, but it's infrastructure that I argue doesn't need to exist.

And my (poorly phrased) point above was, that it already does :) You are simply a) ignoring that it happens right now and b) assuming that it won't happen in the future :)

To give a different perspective:

  • we are building many Go projects within our system-wide cross-language software component manager (rpm/dnf on Fedora, CentOS and RHEL)
  • it allows us to do many of the same tricks as the vgo proposal, by playing with the rpm-layer component namespace (typically, renaming a project at the rpm namespace level from project to compat-project-x to allow distinguishing between incompatible versions of the same import path, exactly as vgo will do).
  • these tricks are definitely useful in helping complex Go projects build
  • though it is not as complete and robust as doing it at the language level
  • we'd rather relay language constraints to rpm/dnf than add an rpm/dnf overlay of constraints over the original code.
  • we are quite sick of all the filesystem workarounds currently needed to convince the go tools to look at Go project sources. The physical GOPATH won't be missed at all.

Therefore we are thrilled about vgo and hope it will be deployed soon (we needed it years ago).

That being said, our use of vgo-like workflows exposed the following difficulties.

The vgo proposal congratulates itself on having made makefiles unnecessary. This is not quite true.

  • a lot of Go projects have fallen in love with autogenerated Go files, and there is no standard way in Go to request that they be regenerated or to express what the generation needs in order to succeed
  • it is so bad that many projects have to ship with pregenerated files (sometimes making up complete separate pregenerated files repositories)
  • those files are a huge source of version incompatibilities. Humans take care to write code that won't need changing every day when exposed to new dependency versions. Autogenerated files, OTOH, are often closely tied to an exact generation environment, because generator tool authors assume you'll just regenerate them as needed.
  • for vgo to be successful, it needs a way to identify those files, strip them from .zip modules, and express how to regenerate them to users of the .zip modules

This is aggravated by the current "everything must be in GOPATH" rule

  • everything cannot be in GOPATH when the generation depends on proto files published outside GOPATH by multi-language projects
  • the Go toolchain needs to learn to read resources outside its GOPATH
  • the Go toolchain needs to learn to publish resources outside GOPATH, when they are exposed to projects not necessarily written in Go
  • the current workaround of duplicating resources in copies within GOPATH is another source of version incompatibilities
  • everyone will not copy the same files at the same time
  • sometimes projects copy from several sources, creating one-of-a-kind mixes.
  • every project will QA on its own copy, making Go project composition dangerous
  • the vgo version solver won't work as intended if it computes relationships between code releases but ignores relationships between resource releases.
  • vgo needs to make private copies of things published outside GOPATH unnecessary, to avoid an explosion of incompatible module versions.

A lot of projects have exploited the "every Go directory is a separate package" rule to publish projects with incompatible, or completely broken code in separate subdirectories

  • a lot of other Go projects have used the same rule to cherry-pick specific subdirectories and use them in a context incompatible with the original project unit tests and QA
  • that "works" and produces "reproducible builds" as long as no change in the original project or in the project consumer makes use of some other part of the original project. When that happens the whole sand castle crumbles under the accumulated technical debt,
  • you see projects refusing to update to newer project versions because that requires cleaning up their previous misuses of other projects.
  • you have a strange schizophrenia where Go projects consider themselves not responsible for bugs in the codebases they reuse, and at the same time refuse to apply the fixes published by those very same codebases (I guess we're all humans and unpleasant reality denial is hardwired in the human brain)
  • fixes do not propagate, and everything is "fine" till you hit the bugs that required the fixes in the first place or the unit tests that were invalidated by cherry-picking.
  • an automated security checker that compared the security state of projects, with the security state of the same projects inside the vendor dirs, would have a field day
  • the bigger the project, the more likely it is to have such ticking bombs hidden inside its vendoring
  • by clearly defining project boundaries inside zip modules, vgo will put an end to such practices
  • however it will also entail a major cleanup of many Go codebases
  • for vgo to be successful, it needs to provide tooling to help existing Go projects refactor and split their codebases into modules that make sense from a software compatibility point of view. Otherwise projects that use vgo will either produce irreconcilable constraints or just lock down the status quo (which works for existing code, but is terrible from a code evolution point of view).

The current practice of exposing the whole Go codebase in a single GOPATH tree, with few limits between projects, has produced deleterious side effects from a software engineering POV:

  • code is not committed in the project where it makes technical sense but in the most convenient repo for the code author (the repo where he has access)
  • Go projects bleed into one another, with parts of one project depending on parts of another project, and parts of that other project depending on the original project
  • that effectively produces locked-down version constraint graphs where you cannot move one part without moving all the others
  • no one has the resources to change all the others in a single operation
  • projects deadlock and stop progressing
  • using modules with clear boundaries and trying to make inter-module version constraints explicit will expose those situations
  • that will be massively unpopular (even though the problem exists today, hidden under the veil of vendoring)
  • quite often it's not just a matter of separating a subdirectory: the problematic project dependencies can occur in an A/B/C/D directory tree at the A and D levels but not at B and C.
  • for vgo to be successful, it needs to provide tooling to help existing Go projects refactor and split their codebases in separate modules that follow dependency graph lines

Testing fixtures, examples, and testdata are a whole other can of worms, with their own dependency and dependency-version needs

  • they need to be isolated one way or another in separate modules or their needs will poison the solver version resolution
  • if you ignore their needs you effectively decide no one but the original project will run unit tests
  • this is quite dangerous when you allow version upgrades. Problems will happen. You need to detect them
  • integration tests that depend not just on some other code, but a specific environment no one but the original project can replicate (for example, a specific server or website to talk to), probably need specific isolation

I've probably forgotten a lot more things, but those are the most critical.

Lastly:

  • Go projects do not live in isolation
  • some Go projects publish connectors to other languages
  • some projects in other languages publish Go connectors
  • that makes treating semver differently from those other languages a problem.
  • you can't have everything but the Go part updating to the latest minor version when a multi-language project releases.
  • that would be hell from an operational management POV.
  • automatic upgrade to the latest subrelease plays a huge role in simplifying software constraint graphs by making everyone vacuum up older versions (the benefit is technical, the mechanism is social).
  • high fidelity rebuilds are similar to pixel-perfect websites. They look like an obviously good idea, they reward the original author ego, but they're a PITA for anyone that actually tries to use the result because they're not evolvable and adaptable to local (not original) context.

On proxy servers:

for easy debugging, $GOPROXY can even be a file:/// URL pointing at a local tree.

Please make it work with a file:/// or https:/// URL pointing to a directory containing third-party modules in zip form (and exclude everything else)

That is the simplest way for an entity to coordinate the work of several development teams that rely on subsets of the same third-party projects: have a QA/legal/security person responsible for vetting "good" third-party releases and deploying them in a common dir, and have everyone else's work depend on the modules available in this common dir.

That way you are sure no one starts working on a rev incompatible with the work of others, and no one stays on a buggy or dangerous rev already identified by another team, and your dev stations are not continuously waiting on the download of the same copies of software already available locally.
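
A sketch of that workflow as I read the proposal, assuming /srv/modules is the shared directory of vetted modules laid out in the form the proxy protocol expects:

# Point the toolchain at the vetted shared tree and nothing else.
export GOPROXY=file:///srv/modules

# All module downloads now come from the vetted tree; a release the
# QA/legal/security owner has not deployed there simply cannot be fetched.
vgo build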

@Merovius I disagree, but I think we're straying off-topic and don't want to bog down the issue discussion. I'm happy to follow up in a different medium, however. (My email is in my github profile and I'm joeshaw on the Gophers slack.)

https://github.com/golang/go/issues/24301#issuecomment-374882685, @tpng:

Which timezone is used for the timestamp in the pseudo-version (v0.0.0-yyyymmddhhmmss-commit)?

As you noted in your edit, UTC. But note also that you never have to type those. You can just type a git commit hash (prefix) and vgo will compute and substitute the correct pseudo-version.
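
For example (module and commit are hypothetical):

# vgo resolves the hash prefix to a full pseudo-version such as
# v0.0.0-20180402180959-91bcc047d0d4 (timestamp in UTC) and records
# that in go.mod.
vgo get example.com/mod@91bcc04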

https://github.com/golang/go/issues/24301#issuecomment-374907338, @AlexRouSg:

Will you be addressing C dependencies?

https://github.com/golang/go/issues/24301#issuecomment-376606788, @stevvooe:

what will happen with non-Go resources, such as protobufs or c files?

https://github.com/golang/go/issues/24301#issuecomment-377186949, @nim-nim:

The vgo proposal congratulates itself on having made makefiles unnecessary. This is not quite true. [Discussion of generated code.]

Non-Go development continues to be a non-goal of the go command, so there won't be support for managing C libraries and such, nor will there be explicit support for protocol buffers.

That said, we certainly do understand that using protocol buffers with Go is too difficult, and we'd like to see that addressed separately.

As for generated code more generally, a real cross-language build system is the answer, specifically because we don't want every user to need to have the right generators installed. Better for the author to run the generators and check in the result.

https://github.com/golang/go/issues/24301#issuecomment-375248753, @mrkanister:

I know that the default branch can be changed from master to v2, but this still leaves me with the task to update that every time I release a new major version. Personally, I would rather have a master and a v1 branch, but I am not sure how exactly this would fit the proposal.

As @AlexRouSg and maybe others pointed out, you can do this. I just wanted to confirm their answers. I will also add this to the FAQ.

https://github.com/golang/go/issues/24301#issuecomment-375989173, @aarondl:

Although this works now, is it intended?

Absolutely, yes.

@jamiethermo, thanks so much for comment about Perforce and branches in different directories. I'd forgotten about that feature, but maybe that's what convinced me it was important to allow in vgo.

There's been a nice discussion starting at https://github.com/golang/go/issues/24301#issuecomment-376925845 about vendoring versus proxies. There are clearly two distinct sets of concerns here.

Open source developers tend to want to avoid relying on infrastructure, so they want vendoring, as @joeshaw wrote. To confirm, we'll keep that working, in limited form (only the vendor directory at the top level of the overall target module where you are running go commands).

Enterprise developers have no problem relying on infrastructure - that's just another cost - especially if it brings some larger cost reduction, like not duplicating all their vendored code in every repo and having to spend time keeping it all in sync. Essentially every company we've spoken to wants proxies/mirrors, not vendoring, as @jamiethermo asked for. We'll make sure that works too.

We also very much want to build a shared mirror network that developers have reason to trust and rely on, so that all the open source developers won't feel they must vendor. But that's later. First vgo needs to become go.

https://github.com/golang/go/issues/24301#issuecomment-377220411, @nim-nim:

Please make it work with a file:/// or https:/// URL pointing to a directory containing third-party modules in zip form (and exclude everything else)

I'm not exactly sure what you are asking for by saying "exclude everything else". If $GOPROXY is set, vgo asks that proxy. It never falls back to anywhere else. That proxy can be served by a static file tree, which is mostly third-party modules in zip form. The file tree must also contain some other metadata files though, for navigation and lookup. Those extra files are unavoidable, since HTTP does not give us a standard way to do things like directory listings.

https://github.com/golang/go/issues/24301#issuecomment-377186949, @nim-nim:

Wow, that's a long comment. I think I agree with most of what you wrote. I want to respond to the last bullet in your post:

  • high fidelity rebuilds are similar to pixel-perfect websites. They look like an obviously good idea, they reward the original author ego, but they're a PITA for anyone that actually tries to use the result because they're not evolvable and adaptable to local (not original) context.

I think I would argue that lock files are like pixel-perfect websites. High-fidelity builds, in contrast, gracefully degrade as context changes. By default, a build will use, say, B 1.2 and C 1.4. But then if it's part of a larger build that needs B 1.3, fine, it will get along with B 1.3 but keep C 1.4 (even if newer ones exist) in the absence of a concrete requirement to upgrade. So really high-fidelity builds are the best of both worlds: faithful to the original as far as possible, but not insisting on pixel-perfect when that's not possible.
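
A sketch of that example as go.mod files, with hypothetical module paths and versions:

// go.mod of example.com/a: asks for B 1.2 and C 1.4.
module example.com/a

require (
	example.com/b v1.2.0
	example.com/c v1.4.0
)

// go.mod of a larger build that includes A but needs a newer B.
// Minimal version selection picks B 1.3 (the larger of the two
// minimums) and keeps C 1.4, even if newer versions of C exist.
module example.com/big

require (
	example.com/a v1.0.0
	example.com/b v1.3.0
)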

https://github.com/golang/go/issues/24301#issuecomment-375415904, @flibustenet:

@rsc I'd like to ask you to be more precise about the road map and what we should do now.
Will it follow the Go policy and feature-freeze vgo in 3 months (2 now)?

Vgo is a prototype and will never be released on its own. It's not subject to the freeze or anything else. But this proposal is to move the ideas and most of the code into the main cmd/go. As part of the release, cmd/go is certainly subject to the freeze. Because it's opt-in and because the vgo-specific code is fairly well isolated from the rest of the operation of the go command, work on vgo-specific parts is fairly low risk and I could see a little of it continuing for a couple weeks into the freeze. Right now I'm focused on the proposal discussion and adjustments. Once the proposal seems to be in good shape without significant problems, then we'll turn to moving code into cmd/go.

Should we now go with our pilgrim's baton asking every libs maintainer to add a go.mod file or should we wait for the proposal to be officially accepted (to be sure that name and syntax will not change) ?

I think the go.mod syntax is likely to change (watch this issue). But as I noted in the proposal, we'll keep accepting old syntaxes forever, and vgo will just update existing files, so it's not a huge deal. That said, I wouldn't go out trying to send PRs to every library you can find until the code lands in the development copy of cmd/go.

https://github.com/golang/go/issues/24301#issuecomment-376640804, @pbx0:

is it easy (today) with vgo to determine if you have more than 1 version of a package in a build?

The easiest thing to do today is to build the binary and then run goversion -m on it (see https://research.swtch.com/vgo-repro). When we have a more general module-aware go list, it should be able to do the same thing without building the binary first.
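
Concretely (binary name is a placeholder; goversion is the tool described in the vgo-repro post):

vgo build -o hello .
goversion -m hello   # lists each module version built into the binary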

https://github.com/golang/go/issues/24301#issuecomment-376236546, @buro9:

[Can I make a backwards-incompatible security change, like microcosm-cc/bluemonday@a5d7ef6?]

As @Merovius said, if we're to adopt the Go 1 compatibility guidelines then security changes are explicitly allowed without bumping the major version.

That said, even when you have to do something for security you should still probably strive for minimal disruption. The commit you linked to arguably is more disruptive than necessary, and in a future situation I'd encourage you to approach such a change from the point of view of "how do I break as few clients as possible while still eliminating the security problem?"

For example, don't remove a function. Instead, make the function panic or log.Fatal only if called improperly.

In this case, instead of deleting AllowDocType, I'd have kept it and certainly continued to accept AllowDocType(false), since that's the secure setting. If someone had written a wrapper for your library, perhaps as a command-line program with a -allowdoctype flag, then at least uses of the program without that flag would continue to work.

And then beyond that, it seems like the concern was that the doctype was completely unchecked, but I'd probably have put in minimal checking to keep the most common uses working, and then conservatively rejected others. For example, at the least, I'd have kept allowing <!DOCTYPE html>, and maybe also bothered to allow doctypes with quoted strings containing no &#<>\ chars.

https://github.com/golang/go/issues/24301#issuecomment-375090551, @TocarIP:

[Concerns about not getting updates promptly.]

I really think the opposite will happen, that programs might be more up-to-date, since in minimal version selection it is _impossible_ for one dependency to prevent the overall build from updating to a newer version.

What @Merovius wrote in https://github.com/golang/go/issues/24301#issuecomment-375992900 sounds exactly right to me. The key point is that you only get updates when you ask for them, so things (potentially) break only when you're expecting that and ready to test, debug, and so on. You do have to ask for them, but not significantly more often than in other systems with lock files. And we also want to make it easier to surface warnings like "you are building with a deprecated/insecure version". But it's important not to just update silently as a side effect of non-update operations.

Also added to FAQ.

Thanks to everyone for the great discussion so far and for answering each other's questions. Really great answers from a lot of people, but special thanks to @Merovius and @kardianos. I've updated the FAQ https://github.com/golang/go/issues/24301#issuecomment-371228664 and Discussion Summary https://github.com/golang/go/issues/24301#issuecomment-371228742. There are three important questions not yet answered (they say TODO in the summary), which I will work on next. :-)

@rsc, #24057 has some discussion on using tar instead of zip.

https://github.com/golang/go/issues/24301#issuecomment-375106068, @leonklingele:

If support for go get as we know it today will be deprecated and eventually removed, what's the recommended way to fetch & install (untagged) Go binaries then?

It will still be go get. If the binary's repo is untagged, go get will use the latest commit. But really people publishing binaries should be encouraged to tag the containing repos the same as repos containing libraries (or a mix).

If $GOPATH is deprecated, where will these binaries be installed to?

You don't have to work in $GOPATH anymore, but code is still written to the first directory listed in $GOPATH - it's the source cache, see $GOPATH/src/v after using vgo. Binaries are installed to $GOPATH/bin. As of a few releases ago you don't have to set $GOPATH - it has a default, $HOME/go. So what should happen is that developers stop worrying about setting $GOPATH or even knowing what it is, and they just learn that their binaries are in $HOME/go/bin. They can use $GOBIN to override that location.
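
So, in the default setup just described, on a Unix-like system:

# With $GOPATH unset, it defaults to $HOME/go:
#   downloaded module source cache: $HOME/go/src/v
#   installed binaries:             $HOME/go/bin
# To change only where binaries land:
export GOBIN=$HOME/bin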

@dsnet, thanks I added a link in the discussion summary. Let's keep that discussion over there.

If $GOPROXY is set, vgo asks that proxy. It never falls back to anywhere else. That proxy can be served by a static file tree, which is mostly third-party modules in zip form. The file tree must also contain some other metadata files though, for navigation and lookup. Those extra files are unavoidable, since HTTP does not give us a standard way to do things like directory listings.

As long as the original modules are kept in zip form, removing any temptation to tamper with them, and the indexer is robust and lightweight, that's OK.

Though the listing constraints do not apply to file URLs, and utilities like lftp have been able to list HTTP directories for ages (it does not matter that it's non-standard if it works on the major HTTP servers). So index-less operation is probably possible, and preferable for small entities that do not wish to invest in infra. yum/dnf/zypper also rely on custom indexes, and getting a shared directory indexed is not always as simple as you may think in some organizations.

Open source developers tend to want to avoid relying on infrastructure, so they want vendoring, as @joeshaw wrote

Not really: open source developers mostly want the whole process to be open and transparent, and not to have to rely on someone else's good wishes. So infra is perfectly fine as long as it is itself open source and easy and cheap to deploy locally. Relying on huge blackbox proprietary sites like GitHub clearly does not fall in this category, but that's not the same thing as infra. Open source people ran mirrors decades before everyone else. What they will not accept is closed and expensive-to-set-up mirrors (in open source terms, expensive is measured in human time).

The open, easy, and cheap nature of vendoring is appreciated; the vendoring process itself, and the way it encourages the progressive fossilization of codebases on obsolete versions of third-party code, not so much.

Enterprise developers have no problem relying on infrastructure - that's just another cost

It must be oh-so-nice to work at Google :(. Except for the few enterprises with an existing large Go operation, where investing in Go infra is a no-brainer, everyone else will have to go through long and tedious approval processes, if only to justify paying someone to look at the problem. So any infra cost will reduce Go's reach and prevent its adoption by new organizations.

Enterprises are like open sourcers: they care about cheap and easy. They used to not care at all about open, but that is slowly changing now that they realise it correlates with cheap (to the dismay of traditional enterprise suppliers that used to specialize in expensive blackbox solutions with expensive consulting to help deployments).

Enterprises that contract internal IT to the lowest bidder will definitely insist on mirrors, since they do not want their cheap-ass developers downloading broken or dangerous code those devs do not understand from the Internet. They will pay humans and tools to scan the local mirror content for problems, and force internal IT to use it exclusively.

We also very much want to build a shared mirror network that developers have reason to trust and rely on, so that all the open source developers won't feel they must vendor.

Just publish a reference copy of an indexed directory containing modules somewhere. Forget about any proxy-like setup that requires specific web server configuration. That's what open sourcers do and they have no difficulty getting themselves mirrored. As long as mirroring is just copying the content of a directory and does not require a specific webserver config there are lots of organizations willing to mirror.

As for generated code more generally, a real cross-language build system is the answer,

You know as well as I do that will never happen; someone will always want to invent a new language that does its own stuff. That's a strawman argument.

That does not prevent Go from standardizing a command that launches whatever project-specific process generates the code, with strict guidelines on what it can and cannot do (typically, it should not do anything already covered by standard Go commands, because those commands should already be fine as is).

specifically because we don't want every user to need to have the right generators installed. Better for the author to run the generators and check in the result.

That would require a major rethink of how existing generators are implemented, because right now they do not care at all about version portability and expect the software environment to be frozen before generation. The direct effect is that generating makes any later version change without regeneration dangerous. It does not matter whether the result is checked by humans, as humans will only check against the original version set. vgo relies on being able to make later version changes.

So vgo will have to tackle regeneration sooner or later. Later means waiting for projects to discover that vgo updates are dangerous in presence of generated code.

https://github.com/golang/go/issues/24301#issuecomment-374791885, @jimmyfrasche:

This would also affect things like database drivers and image formats that register themselves with another package during init, since multiple major versions of the same package can end up doing this. It's unclear to me what all the repercussions of that would be.

Yes, the problem of code that assumes "there will be just one of these in a program" is real, and it's something we are all going to have to work through to establish new (better) best practices and conventions. I don't think this problem is being introduced by vgo, and vgo arguably makes the situation better than before.

I understand that some people argue that vgo should adopt Dep's rule that there can't even be a 1.x and 2.x together, but that very clearly does not scale to the large code bases we are targeting with Go. It's unworkable to expect entire large programs to upgrade from one API to another all at once, as the vgo-import post shows. I believe essentially all the other package managers allow 1.x and 2.x together for the same reason. Certainly Cargo does.

In general vgo does reduce duplication compared to vendoring. With vendoring it's easy to end up with 1.2, 1.3, and 1.4 of a given package all in one binary, without realizing it, or maybe even three copies of 1.2. At least vgo cuts the possible duplication to one 1.x, one 2.x, and so on.

It's already the case that authors of different packages need to make sure not to try to register the same thing. For example, expvar does http.Handle("/debug/vars") and has essentially staked a claim to that path. I hope we all agree that a third-party package like awesome.io/supervars should not attempt to register the same path. That leaves conflicts between multiple versions of a single package.

If we introduce expvar/v2, then that ends up being a second, different package, just like awesome.io/supervars, and it might conflict with expvar in a large build. Unlike supervars, though, expvar/v2 is owned by the same person or team as expvar, so the two packages can coordinate to share the registration. That would work as follows. Suppose expvar v1.5.0 is the last before we decide to write v2, so v1.5.0 has an http.Handle in it. We want v2 to be the replacement for v1, so we'll move the http.Handle to v2.0.0 and add API in v2 that allows v1 to forward its calls to v2. Then we'll create v1.6.0 that is implemented with this forwarding. v1.6.0 does not call http.Handle; it delegates that to v2.0.0. Now expvar v1.6.0 and expvar/v2 can co-exist, because we planned it that way. The only problem left is what happens if a build uses expvar v1.5.0 with expvar/v2? We need to make sure that doesn't happen. We do that by making expvar/v2 require expvar at v1.6.0 (or later), even though there's no import in that direction, and of course expvar v1.6.0 also requires expvar/v2 at v2.0.0 or later to call its APIs. This requirement cycle lets us ensure that v1.5.0 and v2 are never mixed. Planning for this kind of cross-major-version coordination is exactly why minimal version selection allows cycles in the requirement graph.
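
A sketch of that requirement cycle as go.mod files, treating expvar as if it were an ordinary versioned module:

// go.mod of expvar at v1.6.0: v1 forwards its registration to v2,
// so it requires v2's API.
module expvar

require expvar/v2 v2.0.0

// go.mod of expvar/v2: require v1 at the coordinating release even
// though no import points in that direction; the cycle guarantees
// that v1.5.0 and v2 can never be mixed in one build.
module expvar/v2

require expvar v1.6.0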

The "only one major version" rule combined with allowing cycles in the requirement graph gives authors the tools they need to manage proper coordination of singletons (and migration of singleton ownership) between different major versions of their modules. We can't eliminate the problem, but we can give authors the tools they need to solve it.

I know that protocol buffers in particular have registration problems, and those problems are exacerbated by the very ad-hoc way that protocol .pb.go files are passed around and copied into projects. That's again something that's mostly independent of vgo. We should fix it, but probably by making changes to the way Go protocol buffers are used, not vgo.

https://github.com/golang/go/issues/24301#issuecomment-374739116, @ChrisHines:

I am most worried about the migration path between a pre-vgo world and a vgo world going badly. [more detail]

I’m really happy this was the very first comment on the proposal. It’s clearly the most important thing to get right.

The transition from old go get and the myriad vendoring tools to the new module system has to run incredibly smoothly. I’ve been thinking a lot recently about exactly how that will work.

The original proposal allows developers the option of putting version 2 of a module in a repo subdirectory named v2/. Exercising that option would allow developers to create a repo that uses semantic import versioning, is compatible with modules, and yet is also backwards-compatible with old “go get”. The post describing this option admits that the vast majority of projects will not want to exercise this option, which is fine. It’s only needed for compatibility if a project is already at v2 or later. At the same time, I underestimated the number of widely-used, large projects that are at v2 or later. For example:

  • github.com/godbus/dbus is at v4.1.0 (imported by 462 packages).
  • github.com/coreos/etcd/clientv3 is at v3.3.3 (imported by 799 packages).
  • k8s.io/client-go/kubernetes is at v6.0.0 (imported by 1928 packages).

To avoid breaking their clients, those projects would need to move to a subdirectory until all clients could be assumed to use modules, and then move back up to the root. That’s a lot to ask.

Another option is to play a trick with git submodules. The old go get command prefers a tag or branch named “go1” over the repo’s default branch, so a project that wanted to enable a smooth transition could create a commit on a “go1” branch that had only a “v2” subdirectory set up as a git submodule pointing back at the real repo root. When “go get” checked out that go1 branch and populated submodules, it would get the right file tree layout. That’s kind of awful, though, and the submodule pointers would need to be updated each time a new release was made.

Both of these have the unfortunate effect that authors must take steps to avoid breaking their users, and even then only for new releases. We’d like instead for users’ code to keep working even without additional work by authors, and ideally even for old releases.

But those are the only ways I can think of to keep unmodified, old go get working as we transition to the module world. If not those, then the alternative is to modify old go get, which really means modify old go build.

Fundamentally, there are two different import path conventions: the old one, with no major versions, and the new one, with major versions at v2 or later. Code in an old tree probably uses the old convention, while code in a new tree - under a directory containing a go.mod file - probably uses the new convention. We need a way to make the two conventions overlap during a transition. If we teach old go get a tiny amount about semantic import versioning, then we can increase the amount of overlap considerably.


Proposed change: Define “new” code as code with a go.mod file in the same directory or a parent directory. The old go get must continue to download code exactly as it always has. I propose that the “go build” step adjust its handling of imports in “new” code. Specifically, if an import in new code says x/y/v2/z but x/y/v2/z does not exist and x/y/go.mod says “module x/y/v2”, then go build will read the import as x/y/z instead. We would push this update as a point release for Go 1.9 and Go 1.10.

Update: Copied out to #25069.


That change should make it possible for module-unaware packages to use newer versions of module-aware packages, whether those packages choose the “major branch” or “major subdirectory” approach to structuring their repos. They would no longer be forced into the subdirectory approach, but it would still be a working option. And developers using older Go releases would still be able to build the code (at least after updating to the point release).

@rsc I've been trying to figure out how we could make the transition to vgo work as well, I've come to the same conclusions that you laid out in your response and your suggestion matches the best approach I've come up with on my own. I like your proposed change.

https://github.com/golang/go/issues/24301#issuecomment-377527249, @rsc:

Then we'll create v1.6.0 that is implemented with this forwarding. v1.6.0 does not call http.Handle; it delegates that to v2.0.0. Now expvar v1.6.0 and expvar/v2 can co-exist, because we planned it that way.

This sounds easier than it is. In reality, in most cases, this means v1.6.0 has to be a complete rewrite of v1 in the form of a v2 wrapper _(a forwarded call to http.Handle will result in registering another handler - one from v2 - which in turn means all related code also has to come from v2 to correctly interact with the registered handler)_.

This will very likely change subtle details of v1 behaviour, especially over time, as v2 evolves. Even if we're able to compensate for these subtle changes and emulate v1 well enough in v1.6.x, it's still a lot of extra work, and it very likely makes future support of the v1 branch (I mean successors of v1.5.0 here) meaningless.

@powerman, I'm absolutely not saying this is trivial. And you only need to coordinate to the extent that v1 and v2 fight over some shared resource like an http registration. But developers who participate in this packaging ecosystem absolutely need to understand that v1 and v2 of their packages will need to coexist in large programs. Many packages won't need any work - yaml and blackfriday, for example, are both on v2 that are completely different from v1 but there's no shared state to fight over, so there's no need for explicit coordination - but others will.

@powerman @rsc
I'm developing a GUI package, which means I cannot even have 2+ instances due to the use of the "main" thread. So, coming from the worst-case singleton scenario, this is what I've decided to do:

  • Only have a v0/v1 release so it is impossible to import 2+ versions

  • Have public code in its own API version folder, e.g. v1/v2 (assuming vgo allows that), or maybe api1/api2.

  • Those public API packages will then depend on an internal package, so instead of having to rewrite everything for a v2, it becomes a rolling rewrite as the package grows, which is much easier to handle.

In https://github.com/golang/go/issues/24301#issuecomment-377529520 the proposed change defines "new" code as code with a go.mod file in the same directory or a parent directory. Does this include "synthesized" go.mod files created from reading in dependencies from a Gopkg.toml for example?

@zeebo, yes. If you have a go.mod file in your tree then the assumption is that your code actually builds with vgo. If not, then rm go.mod (or at least don't check it into your repo where others might find it).

@AlexRouSg, your plan for your GUI package makes sense to me.

@rsc hmm.. I'm not sure I understand and sorry if I was unclear. Does a package with only a Gopkg.toml in the file tree count as "new" for the definition?

@rsc

As for generated code more generally, a real cross-language build system is the answer, specifically because we don't want every user to need to have the right generators installed. Better for the author to run the generators and check in the result.

We managed to solve this by mapping protobuf into the GOPATH. Yes, we have it such that casual users don't need the tools to update, but for those modifying and regenerating protobufs, the solution in protobuild works extremely well.

The answer here is pretty disappointing. Finding a new build system that doesn't exist is just a non-answer. The reality here is that we won't rebuild these build systems and we'll continue using what works, avoiding adoption of the new vgo system.

Does vgo just declare bankruptcy for those that liked and adopted GOPATH and worked around its issues?

@zeebo, no, having a Gopkg.toml does not count as new; here "new" means expected to use vgo-style (semantic import versioning) imports.

@stevvooe:

We managed to solve this by mapping protobuf into the GOPATH. ...
Does vgo just declare bankruptcy for those that liked and adopted GOPATH and worked around its issues?

I haven't looked at your protobuild, but in general, yes, we are moving to a non-GOPATH model, and some of the tricks that GOPATH might have enabled will be left behind. For example GOPATH enabled the original godep to simulate vendoring without having vendoring support. That won't be possible anymore. On a quick glance, it looks like protobuild is based on the assumption that it can drop files (pb.go) into other packages that you don't own. That kind of global operation is not going to be supported anymore, no. I'm completely serious and sincere about wanting to make sure that protobufs are well supported, separate from vgo. @neild would probably be interested to hear suggestions but maybe not on this issue.

@stevvooe given @rsc's comments in https://github.com/golang/go/issues/24301#issuecomment-377602765 I've cross referenced https://github.com/golang/protobuf/issues/526 in case that issue ends up covering the vgo angle. If things end up being dealt with elsewhere I'm sure @dsnet et al will signpost us.

Note: I didn't read the previous comments closely; it seems the problem was solved with a different approach. Below was my idea.

Just an idea.

How about making vgo get aware of a specific tag like vgo-v1-lock?
When a repository has that tag, vgo could ignore other version tags and pin to the tagged commit.

So, when a repository's latest version is tagged v2.1.3,
but the owner also pushes a vgo-v1-lock tag to the same commit,
it could be written in go.mod as:

require (
    "github.com/owner/repo" vgo-v1-lock
)

It should not get updated even by vgo get -u, until the repository owner changes or removes the tag.
This could make it easier for big repositories to prepare their move.

When a library author is ready, the author could announce to users
that they can manually update by adding "/v2" to the import path.

How do we handle the case where we need to patch a deep dependency (for example, to apply a CVE fix that the original author has not yet released in a tag)? It seems the vendoring strategy can handle this, since you could apply a patch to the original author's release. I don't see how vgo can handle this.

@chirino you can use the replace directive in the go.mod file to point to the patched package.
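
A sketch of what that looks like in go.mod; the module paths, the version, and the location of the patched copy are all hypothetical:

module example.com/myapp

require deep.example.com/dep v1.4.0

// Build against a locally patched copy carrying the CVE fix
// instead of the released v1.4.0.
replace deep.example.com/dep v1.4.0 => ../dep-patched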

@rsc

On a quick glance, it looks like protobuild is based on the assumption that it can drop files (pb.go) into other packages that you don't own.

This is not, at all, what the project does. It builds up an import path from the GOPATH and the vendor dir. Any protobuf files in your project will then get generated with that import path. It also does things like map imports to specific Go packages.

The benefit of this is that it allows one to generate protobufs in a leaf project that are dependent on other protobufs defined in dependencies without regenerating everything. The GOPATH effectively becomes the import paths for the protobuf files.

The big problem with this proposal is that we completely lose the ability to resolve files in projects relative to Go packages on the filesystem. Most packaging systems have the ability to do this, albeit they make it hard. GOPATH is unique in making it very easy.

@stevvooe I'm sorry but I guess I'm still confused about what protobuild does. Can you file a new issue "x/vgo: not compatible with protobuild" and give a simple worked example of a file tree that exists today, what protobuild adds to the tree, and why that doesn't work with vgo? Thanks.

What if the module name has to change (lost domain, change of ownership, trademark dispute, etc.)?

@jimmyfrasche

As the user:
Then as a temp fix you can edit the go.mod file to replace the old module with a new one while keeping the same import paths. https://research.swtch.com/vgo-tour

But in the long term, you would want to change all the import paths and edit the go.mod file to use the new module. Basically the same thing you'd have to do with or without vgo.

As the package maintainer:
Just update the go.mod file to change the module import path and tell your users of the change.

@jimmyfrasche,

What if the module name has to change (lost domain, change of ownership, trademark dispute, etc.)?

These are real, pre-existing problems that the vgo proposal does not attempt to address directly, but clearly we should address them eventually. The answer to code disappearing is to have caching proxies (mirrors) along with a reason to trust them; that's future work. The answer to code moving is to add an explicit concept of module or package redirects, much like type aliases are type redirects; that's also future work.

The answer to code disappearing is to have caching proxies (mirrors)

IMO that really is for the enterprises. Most small companies and others would be perfectly fine with vendoring and committing all the dependencies into the same repo.

Filed #24916 for the compatibility I mentioned in the comment above.
Also filed #24915 proposing to go back to using git etc directly instead of insisting on HTTPS access. It seems clear that code hosting setups are not ready for API-only yet.

minor proposal to create consistency in mod files with the planned vgo get command

In the "vgo-tour" document, the vgo get command is shown as:

vgo get rsc.io/quote@v1.5.2

How about mirroring this format in the mod file? For example:

module "github.com/you/hello"
require (
    "golang.org/x/text" v0.0.0-20180208041248-4e4a3210bb54
    "rsc.io/quote" v1.5.2
)

could be simply:

module "github.com/you/hello"
require (
    "golang.org/x/[email protected]"
    "rsc.io/[email protected]"
)
  • improves consistency with command line
  • single identifier defines item completely
  • better structure for supporting operations defined in the mod file that require multiple versioned package identifiers

Seeking more clarity on how this proposal deals with "binary only" package distribution.

Binary library versioning/distribution doesn't seem to show up in any of the documents describing vgo. Is there a need to look at this more carefully?

The way it works today, if I can use the plain git tool, go get will work just fine. It does not matter if it is a private GitHub repository or my own Git server. I really love it.

From what I understand, it is going to be impossible to keep working that way. Is that true? If yes, is it possible to keep the option of using a locally installed git binary to check out the code (even if it requires an explicit CLI flag)?

@korya Please see the recently filed issue https://github.com/golang/go/issues/24915

@sdwarwick, re https://github.com/golang/go/issues/24301#issuecomment-382791513 (go.mod syntax), see #24119.

re https://github.com/golang/go/issues/24301#issuecomment-382793364, it's possible that I don't understand what you mean, but go get has never supported binary-only packages, and we're not planning to add support. It's too hard to enumerate all the possible different binaries one might need. Better to require source and be able to recompile when the dependencies or compiler change.

@rsc I believe https://github.com/golang/go/issues/24301#issuecomment-382793364 is referring to
https://golang.org/pkg/go/build/#hdr-Binary_Only_Packages

@AlexRouSg Yes. Those aren't supported by "go get" (but "go build" does support them as a build type), which is what @rsc is referring to. The distribution of those types of packages has to be done externally to the "go get" tooling, and thus likely the same for this new proposal.

I expect the current support for binary-only packages will continue to work as poorly as it has ever worked. I won't go out of my way to remove it.

I updated the discussion summary again. I also filed #25069 for my suggestion earlier about minimal module-awareness in old cmd/go.

@kybin, re https://github.com/golang/go/issues/24301#issuecomment-377662150 and the vgo-v1-lock tag, I see the appeal, but adding a special case for that means adding more special cases throughout the rest of the module support. I don't think the benefit is proportional to the cost in this case. People can already use pseudo-versions to get a similar effect. I also worry that the tag will move and/or people will not properly respect backwards compatibility (for example moving vgo-v1-lock from v2.3.4 to v3.0.0 just to avoid the semantic import versioning). So on balance I think we probably should not do this.

I think it's time to mark this proposal accepted.

There was never any suspense about whether it would be accepted in some form. Instead the goal of the discussion here was to work out the exact form, to identify what we should adjust. As I wrote in the blog post:

I know there are problems with it that the Go team and I can’t see, because Go developers use Go in many clever ways that we don’t know about. The goal of the proposal feedback process is for us all to work together to identify and address the problems in the current proposal, to make sure that the final implementation that ships in a future Go release works well for as many developers as possible. Please point out problems on the proposal discussion issue. I will keep the discussion summary and FAQ updated as feedback arrives.

The discussion summary and FAQ are up-to-date as of right now. The discussion here and off-issue prompted the following important changes:

  • minimal module-awareness in old cmd/go, #25069.
  • restoring minimal vendor support, #25073.
  • restoring support for direct Git access, #24915.
  • drop quotation marks in go.mod, #24641.
  • better support for gopkg.in, #24099, others.
  • support for naming commits by branch identifiers, #24045.

It also prompted discussions about possibly changing the go.mod file name and syntax, but the only resulting change was dropping the quotes.

The discussion has died down, and also it's gotten long enough to be quite painful to load on GitHub (apparently 100 comments is too many!). By marking the proposal accepted, we can focus on using vgo and getting it ready for inclusion in Go 1.11 as a "preview", and we can move to issues that GitHub can load more quickly.

There are, of course, still more bugs to be found and fixed, and more design adjustments to be made. Discussions of those details can be done on new issues specific to those details. Please use an "x/vgo:" prefix and the vgo milestone.

Thanks everyone.

@rsc What is the way to try vgo now? Should we fetch and build github.com/golang/vgo or github.com/golang/go?

@ngrilly keep using go get -u golang.org/x/vgo.

@rsc thank you for the hard work and time you've put in to this proposal. I really hope we eventually get to a point where there is a good story around Go dependency management.

I think it's time to mark this proposal accepted.

I don't think it's appropriate for an individual submitting a proposal to declare when it's ready to be accepted. I think those submitting a proposal, especially one this large and opinionated, should not have a say on it being accepted. Their submission of the proposal communicates their biases. On the other hand, I think authors should definitely have a say in rejecting the proposal if they have a change of heart after raising it.

At this time it feels like there are deep-cutting disagreements around technical decisions within vgo, ones that I fear will fragment the ecosystem _and_ community. With that in mind, I feel we should allow a little more time for competing proposals to be finished and to have their spotlight. Once that happens, we need to have a _neutral_ party that consists of representation from multiple companies, and the community, to facilitate the discussion and ultimate decision.

I've a growing concern that a good portion of the Go leadership have become too isolated from the community behind Go (including other companies who use it), and that the language and ecosystem are starting to hurt as a result. This is why I think we need to have a neutral group, with representation that reflects the users of the Go programming language, help ratify proposals like this.

Ultimately, working as a software engineer inside of Google results in a different perspective compared to a large portion of the industry. Having the critical mass of the core developers being within Google does not contribute to the diversity we need when driving Go forward.

@theckman Your line of thinking seems to be:

  1. Decisions about Go are made by a small team mainly composed of Google engineers.
  2. Google engineers are isolated from the rest of the community, because the needs of Google are very specific.
  3. This leads to decisions which favor Google perspective and are not adapted to other Go users.
  4. This could hurt and fragment the Go community.
  5. To solve this issue, Go should adopt a formal decision process where decisions are made collegially by a group reflecting the diversity of the Go community (something "democratic").

In theory, I'd be inclined to like anything "democratic", but in practice:

  • I don't think the decisions made by the Go team are biased towards the needs of Google, at the expense of "small" Go users. As a developer in a very small shop (quite the opposite of Google), I feel like Go is very well adapted to our needs, and I know other small teams around me happily using Go. Personally, the only gripes I have with the Go language have been acknowledged by the Go team (dependency management, lack of generics, verbose error handling) and I'm confident they are working on it. Could you provide examples of decisions made by the Go team that would have been different and would have better served "the rest of the community" if we had a "democratic" process?

  • I'm not convinced that adopting a "democratic" decision process would automatically solve the issues you mentioned and eliminate the risk of "fragmenting" the community. It can be an improvement over the BDFL model, but it is not a guarantee of stability per se. Open source history, and human history in general, provides plenty of examples of democratic societies that have been ruined by disagreements and conflicts.

@theckman, while @ngrilly was trying to be polite, I'll be concrete: if you see any technical issues why the vgo proposal isn't ready to be accepted - tell us ASAP and right here! If you believe some known issues weren't adequately addressed - tell us. If there are no such cases - what's the difference who says "it's time to accept the proposal"? We had two months to discuss it; if you believe there is a technical reason why that's not enough - tell us.

It sounds like you want to add more politics here for no reason, which will just slow everything down - and that's if we're lucky and it does no other harm. And even if you believe there is a reason for this - this is not the right place to discuss it; please start this discussion on the mailing list instead.

Sorry for one more offtopic comment, everyone!

@powerman Sorry to be lame and quote my earlier comment:

At this time it feels like there are deep-cutting disagreements around technical decisions within vgo, ones that I fear will fragment the ecosystem and community. With that in mind, I feel we should allow a little more time for competing proposals to be finished and to have their spotlight.

I know there are alternative proposals coming, and my ask was to put the kibosh on accepting this until those have had the time of day. We've not had dependency management for a while now, so I think it's not unreasonable to ask for a short stay while allowing the finishing touches on other proposals.

I'm not sure it's easy to articulate it well in this issue, just because the format is a bit constrained. Even then I'd be hesitant, because I'd basically be duplicating portions of the WIP proposals. I wouldn't want to fragment the ideas or steal the thunder from the authors.

However, knowing those proposals are in progress and who is working on them, it felt like the declaration of this being ready was an active attempt to stifle those competing opinions. I felt compelled to raise these concerns because of patterns I've observed over my years as a Gopher. I genuinely want the language, ecosystem, and community to succeed, and these are being raised solely with that goal in mind.

Edit: To clarify, by short stay I don't mean two months or anything like that. I mean a few weeks.

@ngrilly Sorry, I didn't mean to ignore your comment. I had planned to address it along with the earlier one, but ended up being more verbose than I wanted.

I think there are two issues. While I feel there's something to be discussed in a different forum around how these decisions are made, which is why I added some context around that, I really want to focus on putting a temporary pause on the acceptance of this proposal until those other proposals have had a chance to become public.

I know there are alternative proposals coming, and my ask was to put the kibosh on accepting this until those have had the time of day. ... [K]nowing those proposals are in progress and who is working on them, it felt like the declaration of this being ready was an active attempt to stifle those competing opinions.

I assure you it was not. I've been pretty clear about the timeline here. My first post from mid-February says the goal is to integrate the vgo proposal into Go 1.11; the development freeze approaches. I know nothing about any other proposals in progress or who is working on them. That's news to me. If people want to engage with this proposal or make counter-proposals, this proposal has been open for a month and a half, so it's getting a bit late.

To be clear, I did not mark the proposal accepted, despite what Golang Weekly and maybe others reported. I only said that I think it's time to do so. That is, I did not apply the Proposal-Accepted label, exactly because I wanted to check that there was general consensus for doing so first. And the general consensus does seem to be in favor of acceptance, at least judging from the overall discussion here as well as the emoji counters on https://github.com/golang/go/issues/24301#issuecomment-384349642.

I know there are alternative proposals coming

@theckman, if you know about something like this, you're probably the only one who knows it. To date, I've not seen anyone raising this issue until now. I think Russ's statements about wanting to try this for Go 1.11 were very clear from the beginning, so if anyone is working on alternative proposals, they had about 2 months to put them forward, even as a heads-up.

I also think that we can accept the fact that the Go Team has a good track record of not making decisions on a whim, and if we look just at how they pulled the Alias at the last moment from Go 1.8 as it was not the right thing to do at the time, then we should probably give them the courtesy of at least allowing them to build their own experiment/solution.

At the end of the day, the proposal brings a lot more than just an algorithm for selecting which version of a dependency is used. If anyone figures out a way to improve it, then there are two options: submit it via the regular CL process OR build their own tool and let the community use it, should they choose to do so. The Go Team can still provide their own version of the tool imho, so I don't see this as a closed problem.

However, please keep in mind that most of the divisive actions so far were taken by the community, not the Go Team. Let's give the Go Team their chance to build a tool, evaluate it when it's viable to do so, and then bring arguments to the table on how to improve it and move forward, rather than write about how bad it is.

Please consider this as part of a different kind of an experience report: the one where I was a _very_ vocal opponent of the Alias proposal to then understand and now see the proposal in action.

Edit: the original message had a very unfortunate omission: "record of *not* making decisions on a whim" should have been the text; unfortunately the "not" part was missing. My apologies for it.

i have a detailed writeup of foundational concerns with the proposal. i have been trying to finish this writeup so that i could introduce them all at once - this is a complex, subtle domain and as such, these issues must be dealt with in totality - but life and work have made that difficult.

While i have alluded to this writeup in Slack, as well as discussed portions of it directly with @rsc, i have opted not to make mention of these here until now. It seemed to me that advertising this writeup before i was prepared to fully release it would not be terribly constructive. But, as has been noted, it's been two months, so i'm going to make a big push to get the start of the series out next week.

(edit: this is the "alternatives" that @theckman was referring to)

@sdboyer You mentioned that you have multiple concerns. Could you please at least publish a list of them right now?
I'm working with several systems that take dependency hell to another level (Chef, Go, npm, Composer), and from experience this proposal is a solution to all of them as far as Go is concerned. It has potential for other systems and languages once Go-like tooling, especially for static analysis of code, is implemented.

@theckman, can you confirm that you were only referring to @sdboyer's feedback? That's not a secret. Sam mentioned it literally the first day vgo was released ("i am writing more detailed documents about my concerns" - https://sdboyer.io/blog/vgo-and-dep/). But that's feedback, not a proposal, and you referred multiple times to "other proposals", plural. Is there more you know about?

What are the implications of vgo for go/types API users? What's the current status of go/types support?

I received a PR mdempsky/gocode#26 to add a vgo-aware go/types.Importer implementation, but it's unclear to me if/why this is necessary.

Assuming it is necessary, can we add a canonical vgo-aware go/types.Importer somewhere else (e.g., the x/vgo or x/tools repo) so that go/types-based tools don't need to each reimplement this support?

I haven't really followed vgo details, so maybe this is simply "no impact," but I don't see any mention of go/types above. Google searches for "vgo go/types golang" are also similarly non-informative.

Thanks.

@mdempsky, the plan is to have a vgo-aware (and for that matter go build cache-aware) package loader, probably golang.org/x/tools/go/packages, but it doesn't exist yet. People should wait for that package instead of writing code that will need to be thrown away. Don't merge the PR. I commented on it.

@mdempsky was going to reply over in https://github.com/mdempsky/gocode/pull/26 but I'll reply here for now.

https://github.com/mdempsky/gocode/pull/26 is entirely throw-away; just a proof-of-concept that uses a now-abandoned CL against vgo.

I've just seen @rsc reply so I'll simply point out there is also a discussion going on over in https://github.com/golang/go/issues/14120#issuecomment-383994980.

Summary: after some years of experience with versions, the last best attempt was a very complex algorithm, a SAT solver. But if you make some simple modifications to the input data, the NP-complete decision problem becomes not only manageable, but very fast.
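
To make the idea concrete, here is a rough, self-contained Go sketch of minimal version selection. This is not the actual vgo code; the module names, versions, and the plain string comparison standing in for real semver ordering are all illustrative.

package main

import "fmt"

type module struct {
	path, version string
}

// reqs maps each module version to its declared requirements
// (hypothetical data standing in for go.mod files).
var reqs = map[module][]module{
	{"app", "v0.0.1"}:  {{"helm", "v2.9.0"}, {"grpc", "v1.8.0"}},
	{"helm", "v2.9.0"}: {{"grpc", "v1.3.0"}},
}

// buildList walks the requirement graph, keeping for each module path
// the maximum of the minimum versions requested. No SAT solving involved.
func buildList(root module) map[string]string {
	selected := map[string]string{root.path: root.version}
	queue := []module{root}
	for len(queue) > 0 {
		m := queue[0]
		queue = queue[1:]
		for _, dep := range reqs[m] {
			// String comparison stands in for semver comparison here.
			if dep.version > selected[dep.path] {
				selected[dep.path] = dep.version
				queue = append(queue, dep)
			}
		}
	}
	return selected
}

func main() {
	// Prints app's build list: grpc ends up at v1.8.0, the maximum of
	// the minimums requested, not the newest version in existence.
	fmt.Println(buildList(module{"app", "v0.0.1"}))
}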

As a small-team user of Go with a lot of experience using NPM, I like the vgo proposal very much. FWIW, I am looking forward to vgo being implemented in go proper and the sooner it gets in the better for my team and I.

Not that I'm anyone special, just that since I saw discussion about small team problems I thought I'd chime in.

Here is my review.

On the surface, the _Proposal_ section of the last revision of the proposal document seems fine, if not very specific (e.g. it is unclear to what extent Subversion may be supported). One exception is that “Disallow use of vendor directories, except in one limited use” seems to say that vendor directories will not be supported in non-module packages at all; is that so?

On the other hand, the proposal implies certain design and implementation decisions that undermine various benefits of the current go get. The losses may be acceptable, some may be averted, but if vgo get is to replace go get, they should be addressed as design considerations and discussed, because otherwise we may end up with a tool which is not an adequate replacement, and either vgo won't be merged into go or go get will have to be resurrected as a third party tool.

The _Implementation_ section says: “In a later release (say, Go 1.13), we will end support for go get of non-modules. Support for working in GOPATH will continue indefinitely.” This is troublesome. First, there are a lot of good projects that have not been updated in years. They still work thanks to the Go 1 compatibility promise, but many will not add go.mod files. Second, it forces the developers of projects without dependencies, or those who do not care about versions, to add module files or abstain from the new go get ecosystem. You may be justified in wanting this, but please explain why. (For me it seems unnecessarily cumbersome; I'd rather use the fork of the old go get. I grant that package managers for other languages are even more cumbersome, and I'm sure that vgo is better than them, but it does not handle my use cases better than the current go get, with occasional help from govendor).

My main concern about vgo vs go is with the workflow they support. I had expressed it at the vgo-intro post. This might broadly belong under the section _Compatibility_ of the proposal, or it might be out of its scope, but it corresponds with other questions and issues raised here.

For the reference, here is a copy of my vgo-intro comment.

In some later release, we'll remove support for the old, unversioned go get.

While other aspects of the proposal sound fine, this one is unfortunate. (So much that if I had to choose between incorporating versioning into the go toolchain and keeping go get working with version control tools, I would choose the latter.) The advantage of vgo is that it facilitates reproducible builds and delays the breakage of your project due to incompatible updates until you as the author of the project (with the go.mod file) want to face it; but the advantage of go get is that it brings benefits of a monorepo to the multi-repository world: due to complete clones of the dependencies, you can work with them as easily as with your own project (inspect history, edit, diff changes; go to the definition of anything and blame it), it facilitates collaboration (you simply push and propose your changes) and generally imposes the view that at any time there is just one current state of the world — the tip of each project — and everything else is history. I think that this unique approach (outside actual monorepos) is a distinctive boon of the Go ecosystem that did more good than bad, and it should not be abandoned.

A more subtle negative consequence of the proposal is that it makes versioning incurable and inheritable: once a project tags a version, it can not expect future changes to reach users without tagging new versions. Even if the original author remains determined to keep tagging, the authors of the forks are now forced either to tag them too (which is particularly awkward if the source project is still active), or to delete old tags.

On the whole I want to emphasize that the current Go approach to dependency management is overall superior to the versioning. It aligns better with the modern, dynamic and collaborative open source that expects that all commits are visible, and publishing only the sources of releases (or "integrating internal changes" in huge nondescriptive commits) is not enough (because it severely reduces visibility, collaboration and dynamism). It can be seen both in monorepos and in the current Go ecosystem that most projects do not need versions. Of course this approach is not the ultimate, it has downsides, and it's important to support versioned projects too, but this should not be done to the detriment of the versionless.

To summarize, the current go get (with auxiliary tools, e.g. godef) supports the workflow that features:

  • editable source code of the dependencies
  • source code of the dependencies under their VCS
  • latest revisions of the dependencies

I guess I can assume that source code of the dependencies will remain editable, i.e. godef will link to _files_ that are _not write protected_ and _used during the build_. However, vgo is going to renege on the other two points. With respect to the second point, #24915 has prolonged the support for VCS tools, but it still declares the goal to drop it; and the workflow requires not only that dependencies are checked out from VCS, but also that the checkout is useful for developers (e.g. not a shallow git checkout, not a git checkout with .git removed) and is used during the build, but vgo may not satisfy this requirement. With respect to the third point, I have justified its value in the vgo-intro comment, but vgo seems to be abandoning it altogether.

A Go versioning tool does not have to drop support for the current workflow, and it must not drop it to maintain the unique benefits of working in the Go ecosystem and be an adequate replacement for go get. The design of vgo makes this challenging, but not obviously infeasible. The _Proposal_ section of the proposal, on the other hand, seems almost compatible with the current workflow. The only challenge it introduces to supporting the third point (checking out the latest revision) — and it's a big one — is that it makes it difficult to decide for modules ≥v2.0.0 whether they may be checked out at master, or whether they have to be checked out as specified because master is at another major version. This is not a challenge for the current go get with gopkg.in because everything is checked out at master by default, and the stuff at gopkg.in is checked out at the matching tag or branch; but vgo blurs this distinction and spreads the gopkg.in model onto all packages. (Moreover, it stops matching branches.) In effect it becomes impossible to tell for sure and necessary to guess how to get the latest revision of the specified major version.

I may have missed it, but how would vgo work in this scenario?

  • I work on service A and service B both depending on lib X (that I also work on)
  • I need a major change in lib X

With the current way of doing things, I just make my changes and compile service A and service B; they pick up whatever is in my $GOPATH for lib X. I fix stuff, then I push lib X with a major semver bump, then push both services A and B, telling them to use the new major version of lib X in their Gopkg.toml.

Now when vgo takes over, go build on my services will try to find a nonexistent new version of lib X on GitHub, and I can foresee all sorts of trouble.

So, am I missing something obvious?

You can use the replace directive for this type of thing.
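
A rough go.mod sketch of that approach, assuming a hypothetical module path and a local checkout of lib X sitting next to the service (filesystem targets for replace are discussed in #24110, mentioned below):

module example.com/you/serviceA

require example.com/you/libx v1.2.3

// During development, point the dependency at the local checkout.
replace example.com/you/libx => ../libx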


@kardianos yeah, but that still means I have to push my lib changes in order to try them?

EDIT: it seems you can use paths for replace (https://github.com/golang/go/issues/24110), which is good. But I can also predict that this will end up being committed by mistake a lot of the time.

Any plans to be able to create an additional file, like go.mod.replace or something like that, so we can define the overrides in dev environments and gitignore them?

@primalmotion To prevent bad commits you probably should use a git hook.

But I suppose the right answer is just: don't do this (replace to a local path) too often. If your library is so tightly coupled, then it shouldn't live in a separate repo (and thus pretend it's loosely coupled). In the general case you should be able to fix, test and release that lib without taking into account the current implementation of these services. Especially with vgo, which guarantees both services will continue using the older lib version (which they were using before) until you manually update them to the newer version of that lib. If you occasionally commit a replace to a local path once per year, it's not a big deal; CI will help you notice and fix it in minutes.

@primalmotion With a go.mod.replace that is not versioned, you could commit a project that is not reproducible. Instead, with replace in go.mod you're sure that you commit exactly what you currently use and test.
It can be a mistake to commit with a replace, but you will notice it and correct it.
Or it can be voluntary; I do it with submodules, and it's fine when you work on a project and a lib together: it's reproducible if you commit the submodule. I did it often in Python and missed it with Go. With vgo I'm happy that I can work like that again. _(hope to be clear despite my bad English, sorry)._

Well, the problem is that we don’t care about reproducible builds until we decide we care (i.e. when we prepare releases). We have a very nimble dev environment where updating a lib is just checkout, rebuild, run tests. We don’t commit the Gopkg.lock in master branches of our services, just the toml with fixed versions for external libs, and major constraints on ours. Once we create a release branch, then we commit the Gopkg.lock, and only then do we have reproducible builds (this is done by our CI).

vgo basically breaks all the workflows we’ve built over the years. Now just to try something as dumb as a little print debugging in a lib (because we all do this, don’t lie :)), or a little optimization, we will have to go over dozens of services, add replace directives everywhere, test, then come back and remove all of them.

What could make vgo work for us:

  • an override system as I mentioned
  • a way to not use it at all and fall back to good old GOPATH for development.

And we really want it to work for us, because it’s awesome.

Can we have something to test in Go 1.11 (it is in freeze now), or maybe Go 1.12?
It is already marked as experimental, and I think the more people test it in real development, the more valuable the feedback will be.

I read about the issue regarding versioned packages. Consider one simple scenario: I write a library that uses a dependency called foo-plist for parsing plists. As a consequence of parsing, that plist library exposes certain types of its own. Now that library upgrades to v2, and my library is forced to upgrade to v2 if I happen to return any objects of those plist types.

This seems quite hard to solve under this proposal, for example if I wish to have my library support both v1 and v2 of the said plist library.

Under npm, for example, my library can simply specify a peer dependency saying >=1.0.0|>=2.0.0, and it is up to the user of my library to decide which version of plist to use. So if the user of my library also uses another library that depends on foo-plist, and both libraries are happy with v1 and v2 of plist, then the user can choose which one to actually import. More importantly, if both of the libraries export plist types, those types will actually be compatible.

If they end up being different import paths, I don't think there is any way to support that.

@itsnotvalid foo-plist.2 can import foo-plist and re-export its types using type aliases. A good description of this technique can be found at https://github.com/dtolnay/semver-trick
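
A minimal Go sketch of that trick, with hypothetical import paths: the v2 package forwards the shared types to v1 via aliases, so values flow freely between code built against either major version.

// Package plist is the v2 major version of a hypothetical
// plist-parsing module, imported as example.com/foo-plist/v2.
package plist

import v1 "example.com/foo-plist"

// Node is an alias, not a distinct type: a v2 Node is a v1 Node,
// so libraries exposing either major version interoperate.
type Node = v1.Node

// Parse delegates to the v1 implementation; v2-only API can be
// added alongside without duplicating the shared types.
func Parse(data []byte) (*Node, error) { return v1.Parse(data) }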

Aram made a point here about the difficulty of back-porting changes to previous release branches (or directories). Because "self" references to sources within the same module also include the version in the import path, either patches will not import cleanly or one may inadvertently introduce cross-version imports.

Thoughts?

To be clear, I'm entirely comfortable with cross-module imports using versions in the import path; I think Russ's arguments have been very convincing. I'm less clear about them for within-module imports. I understand the goal of having a single import path for a given package in a build (regardless of whether it is being imported cross-module or within its own module), but if I had to choose, I'd rather have a solution to Aram's problem than to maintain this property. (Also, I imagine the build system could inject the version from the mod file when building a top-level module, and literally inject it into the source after downloading a module dependency)

@rsc is there any chance that we get this moved forward? I don't see anything major holding this back.

I'm sorry if I seem impatient about this but there are more and more tools working on support for vgo and the more we delay this, the more it's going to make a mess for us, tools maintainers, to go back and forth on this.

@sdboyer plans to publish his write-ups this week. I would say it is fair to wait for them.

I think that the algorithm for resolving which dependency versions to select should not be cause to block the entire proposal, which also contains things like modules, the proxy support, versioning, and so on.

And if we, later on, decide to improve/change the algorithm for downloading these dependencies, then we can do so without affecting anything else.

I agree with @dlsniper - we are already well into the freeze, and it's been almost three months since the vgo design was introduced. If its work on 1.11 gets delayed further, I worry that it's going to get pushed back to 1.12.

The first post in what will be a series is finally live. My apologies that this has taken so long to publish, and that it will be yet longer before the series is concluded. But, the first post provides a broad overview of the topics i intend to cover across the entire series, so folks should at least be able to get a sense of scope.

Briefly: there are a lot of great things about vgo, many of which we as a community have wanted for a long time. However, i believe that MVS is unfit for purpose, and should not make it to production. It’s a shame that so many of these things we want have come wrapped around MVS, especially when so few of them are specific to it. i am working on an alternate approach, which is referenced for comparison purposes throughout the blog series, and will be articulated in the final post.

The alternative core algorithm i’m working on will likely be pretty straightforward to migrate go.mod to, so i don’t anticipate that we would have problems there. It would probably be possible to let this go forward as-is, if a separate lock file were added that contains the transitive closure of dependencies, and have _that_ be what the compiler reads from, rather than the build list algorithm. (There are other reasons for a lock file as well, though that’s going to be in post 5.) At the very least, that gives us an escape valve.

However, if we say MVS is OK, even as a stopgap, then it goes in and gains the advantage of inertia. At that point, we have to prove it inadequate to @rsc (though really, he’s set that as the standard already, even before it’s merged), and he believes that this is a true statement about go get, right now:

Today, many programmers mostly don't pay attention to versioning, and everything mostly works fine.

Given all of the above, my fear is that letting this go ahead now will create “generics, round two” - except this time it’s around rules that govern how we interact with one another, not with machines.

However, if we say MVS is OK, even as a stopgap, then it goes in and gains the advantage of inertia.

Note, that right now, dep has the advantage of inertia (both by being built on the same premises as other language's version managers and by existing for longer with broad community support). At least for me, the vgo proposal still managed to overcome this inertia by being a good design, supported by good arguments.

I personally don't mind if versioning in Go gets delayed, I'm all in favor of doing something right rather than quick. But at least at the moment, vgo still looks like the right solution to me (and AIUI many people in the community do perceive this as an urgent problem).

@mvdan This change is not without its repercussions. We all want this, but I think it's also wise to take the extra time to quell doubts, especially when the person raising those doubts has clearly given the problem a great deal of thought.

I keep hearing mention of the freeze and its impact on getting a preview of vgo in 1.11. What’s the official line at this point on whether this will be integrated for 1.11? This issue seems to me to be surprisingly quiet considering the potential impact.

@sdboyer what’s your position on whether this should be merged into 1.11 considering you have only _just_ officially stated, publicly, your position on MVS?

If this doesn’t make it into 1.11, then we won’t have this officially available for preview until 1.12 in February 2019 and officially released until at least 1.13 in August 2019. This puts the earliest potential release 18 months after @rsc first started discussing it. Of course we should not unnecessarily rush this, but as @Merovius stated above, many people, myself included, consider an official response to dependency management an “urgent” issue. Waiting 18 months seems excessive.

It certainly seemed that dep was going to be the official response and we’ve converted our repositories to it (and have been pleased with its results). With this proposal effectively deprecating dep for long term use, yet no official way to start integrating vgo (at least until #25069 is merged), we are left in the unsatisfactory position of being forced to use a tool (with dep) that we know has a very limited shelf life.

FWIW, I absolutely think that we should move forward with this by integrating vgo as a proposal in 1.11 and including #25069 in 1.11 (and as patch releases to 1.9 and 1.10 upon the release of 1.11).

I honestly don’t grok the full implications of MVS and @sdboyer’s concerns about it. However, considering his experience in this space, I do think those concerns deserve serious consideration. That said, if he’s onboard with integrating vgo with MVS in 1.11 (while understanding that his [still evolving] proposal, if accepted [for 1.12], must not break modules designed originally for MVS), then I see no reason not to move forward.

I’d also like to thank @rsc for this proposal. I appreciate that Go didn’t just copy another tool’s approach and is trying to address this issue in a way that seems idiomatic. While dependency management is never fun, it certainly seems that, with this proposal, Go has the potential to push the industry forward and possibly even leapfrog systems that are currently considered best-of-breed.

Just to add my $.02, my opinion is that MVS is an “ah ha” moment for dependency management. I appreciate the amount of thought that people have put towards it, but remain convinced that MVS is where this needs to go.

I especially agree with the points others have raised: “automatic security fixes” are a pipe dream at best and a huge can of worms at worst.

Additionally, I'm with @joshuarubin: an official response to dependency management is an urgent issue. Others have commented that we could move forward with MVS now and later change to other solutions if needed; if that's indeed possible I think that's the better way to go.

I propose to decouple major versions from import paths in the following way. (I believe that I have accounted for the reasoning in vgo-import and that I do not degrade vgo achievements stated there.) This is inspired by the idea from #25069 that go build in Go 1.9 and 1.10 should learn to creatively interpret import paths (by dropping the version part); in my proposal old go build does not change, but vgo learns a similar trick.


Syntactically, the only changes are that:

  1. In .go files, import "repo/v2/pkg" remains import "repo/v2/pkg" if v2 is a directory, but becomes import "repo/pkg" otherwise. This keeps compatibility with the current go get.
  2. In go.mod files, module "repo/v2" remains the same if it is in the v2 subdirectory, but becomes module "repo" if it is at the top level. (This is the canonical import prefix.)
  3. In go.mod files, you may also write require repo v2.3.4 as repo2. Then in .go files you will use import "repo2/pkg" (or import "repo2/v2/pkg" if v2 is a directory). This will not be importable by the current go get (unless you use something like require github.com/owner/project v2.3.4 as gopkg.in/owner/project.v2), but this is only necessary when you want to use multiple major versions in the same module and the dependency does not store major versions in subdirectories, which can not be supported by the current go get anyway.

Technically this allows you to write go.mod with:

require repo v1.0.0
require repo v1.1.1 as repo1
require repo v2.2.2 as repo2
require repo v2.3.3 as repo3

but the minimal version selection will resolve this such that both repo and repo1 refer to the repo at v1.1.1, and repo2 and repo3 at v2.3.3. I don't know if this aliasing should be allowed or prohibited.


Advantages:

  • module-aware code will be compatible with the current go get, even past v2.0.0; consequently:

    • no need to make go get minimally module aware (#25069)

    • projects past v2.0.0 will not have to break compatibility with the module-unaware go get

  • projects will not have to wait for their dependencies to become modules before becoming modules themselves [1]
  • no need to deprecate module-unaware projects or to discourage authors from starting new module-unaware projects
  • easier to keep support for the versionless workflow of the current go get (explained here and above)

Disadvantages:

  • may be inconvenient to keep the promise that already written go.mod files will continue to work (unless the new module file is named differently from go.mod)

Ambivalences:

  • the same import path in different modules may refer to different major versions

    • good: easier to maintain past v2.0.0 and on major version change

    • bad: you do not know which major version you use without looking at go.mod

  • modules may define arbitrary import prefixes for use within their code

    • some users will choose to import everything by a short name (e.g. import "yaml" with require gopkg.in/yaml.v2 v2.2.1 as yaml)

[1] Currently vgo may properly support non-modular dependencies only as long as no non-modular transitive dependency of a module is past v2.0.0. Otherwise the project has to wait for all dependencies that indirectly depend on a project past v2.0.0 to become modules.

I've done an analysis of the Gopkg.toml files I was able to find from packages in https://github.com/rsc/corpus and wrote up a summary at https://github.com/zeebo/dep-analysis. Based on the data there, there is little evidence that vgo would be unable to handle almost every identified use case.

I truly hope that this will help reduce fear in the community, and help it come to an agreement that we should go forward with the proposal as is, remembering that there will be an additional 6 months to get real experience with the tool, and make any necessary changes to fix any problems that may arise.

If I quote you:

Almost half of all constraints aren't actually constraints at all: they point at the master branch.

This is probably because only master exists and there are no tags or named branches like v2, v3. If that's the case then the comparison is not fair, because you don't have a choice!

@mvrhov I'm not sure what you mean by "not fair". It would seem to me that vgo and dep would handle that case just fine. Or rather, any realistic alternative would have to handle that case just fine. In particular: if there is no released version yet, in a vgo world they could just be tagged v1.0/v0.x and no change to any import paths (the main idiosyncrasy of vgo) would be necessary.

The point of the analysis, as far as I can tell, is to try and estimate the real-world pain caused by the different approaches. I don't see how this case introduces actual pain for anyone.

This proposal has been open with active discussions for over two months: @rsc & @spf13 have conducted feedback sessions and gathered valuable input from the community that has resulted in revisions to the proposal. @rsc has also held weekly meetings with @sdboyer in order to gain further feedback. There has been valuable feedback provided on the proposal that has resulted in additional revisions. Increasingly this feedback is on the accompanying implementation rather than the proposal. After considerable review we feel that it is time to accept this proposal and let Go’s broad ecosystem of tool implementers begin making critical adjustments so our user base can have the best possible experience.

There have been two objections of this proposal which we feel we should speak to:

  1. The proposal will require people to change some of their practices around using and releasing libraries.
  2. The proposal fails to provide a technical solution to all possible scenarios that might arise involving incompatibilities.

These are accurate in their observation but working as intended. Authors and users of code _will_ have to change some of their practices around using and releasing libraries, just as developers have adapted to other details of Go, such as running gofmt. Shifting best practices is sometimes the right solution. Similarly, vgo need not handle all possible situations involving incompatibilities. As Russ pointed out in his recent talk at Gophercon Singapore, the only permanent solution to incompatibility is to work together to correct the incompatibility and maintain the Go package ecosystem. Temporary workarounds in a tool like vgo or dep need only work long enough to give developers time to solve the real problem, and vgo does this job well enough.

We appreciate all of the feedback and passion you have brought to this critical issue. The proposal has been accepted.

— The Go Proposal Review Committee

To add some color to the record, the weekly meetings with @sdboyer should not be seen as an endorsement. Sam recently started to write about the problems with MVS along with the things he does like about vgo. I am adding this to make sure there isn't some miscommunication to anyone else who comes along. If you want his opinion please read his words. My take is that they contain a fair amount of disagreement with the current intended approach.

@mattfarina FWIW, I read that sentence more as "we are aware of his criticism (as he's expressed it privately) and it didn't change our opinion". It's regrettable that his opinions and arguments aren't public by this point, though.

It feels irresponsible to accept a proposal while there are still foundational concerns outstanding on the approach. Consensus between the author and community domain expert @sdboyer seems like a reasonable minimum standard to reach before the proposal is considered accepted.

@merovius Several of us have shared opinions publicly and privately. A number of folks feel issues brought up were steamrolled (sometimes rudely) rather than given sufficient solutions. I'm starting to share practical problems publicly so that we can try to solve them. For example, just today I shared some details on a practical problem here. Funny side note: this was on the front page of Hacker News at the same time this was marked as accepted.

@peterbourgon As a person who ran the Go Dependency Management survey and who worked on Glide, where I listened to the needs people had and tried to meet them, I can show practical problems (rather than just opinions). That is, I can match desires, needs, and expectations from users to solutions for those problems. My concern is the mismatch between those and the current path of vgo. There are unmet needs and pragmatic problems due to differences in how people do dependency management.

An easy way to start alleviating my concerns is to make vgo work for Kubernetes to Tim Hockin's satisfaction.

An easy way to start alleviating my concerns is to make vgo work for Kubernetes to Tim Hockin's satisfaction.

Make vgo work for Kubernetes today or make vgo work for Kubernetes over the coming years? As I understand it, one of the fundamental design disagreements between vgo and dep is whether we need to work with the ecosystem as it exists today (dep's assumption) or whether we can shift the community towards doing tagged releases & maintaining compatibility (vgo's assumption).

So it's possible that vgo might not work for many-dependency-using Kubernetes for some time, until the Go community norms shift.

@mattfarina Sure. Personally, I find it very frustrating to paint this as a "steamroll", though. @sdboyer has largely abstained from the public discussion for months and there are still no real, concrete arguments from him. He had his reasons and that's fair. But consensus still requires discussion, and at least as far as the public record is concerned, I'm personally not aware of any issues that were brought up and plainly ignored (haven't read your post yet, though).

As far as I'm concerned, any discussion went on behind closed doors. And given that we don't have any information either way, I'd consider it fair to assume that both sides were given appropriate consideration.

@bradfitz SemVer is used by Go packages today, and generally in PHP, in Node.js, in Rust, and in numerous other languages. It's a pretty common thing. I've encountered issues across these languages and more where packages broke from SemVer compatibility issues, sometimes intentionally and sometimes by accident. What will Go do differently to avoid a problem present in all of these other languages, given that people are fallible?

If we can't articulate that, then it's a bad assumption that compatibility will always be maintained and that developers should not have knobs accessible to them to tune that and pass that information up the dependency tree.

I think everyone will agree: the current situation with dependency management is awful, in any language/platform. I believe @bradfitz correctly explained the main point of conflict. Maybe vgo won't succeed, but to me it's obvious we have to change something (I mean not just in Go, but in general), and vgo looks promising enough to give it a try.

@mattfarina We plan to try to implement a service which will automatically check whether compatibility is actually maintained. Integrating it with godoc.org, providing badges for READMEs, using it as a proxy for go get - there are many ways we can try to make it work well enough. Sure, @sdboyer is right that API compatibility doesn't guarantee actual compatibility, but it's a good start and should work well enough in most cases.

So it's possible that vgo might not work for many-dependency-using Kubernetes for some time, until the Go community norms shift.

Hope is not a strategy, especially when existing behaviors and expectations are already well established. Go might have had innovation tokens to spend here if we were having this discussion five years ago, and things were more amenable to influence. But as a consequence of ignoring the issue for so long, it seems clear to me that any tooling proposed now must meet users where they are.

What will Go do differently to avoid a problem present in all of these other languages because people are fallible?

We've been discussing some sort of go release command that both makes releases/tagging easy and checks API compatibility (like the Go-internal go tool api checker I wrote for Go releases). It might also be able to query godoc.org and find callers of your package and run their tests against your new version at pre-release time, before any tag is pushed, etc.
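
As a rough illustration of the API-checking half of that idea (this is not the actual tool), here is a small program that dumps a package's exported surface so two releases can be diffed; the "source" importer and the plain type-string output are simplifications:

package main

import (
	"fmt"
	"go/importer"
	"os"
)

func main() {
	// Load the package named on the command line from source.
	pkg, err := importer.For("source", nil).Import(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	scope := pkg.Scope()
	for _, name := range scope.Names() { // Names() is already sorted
		obj := scope.Lookup(name)
		if !obj.Exported() {
			continue
		}
		// One line per exported identifier; diffing this output
		// between two releases reveals API-surface changes.
		fmt.Printf("%s: %s\n", name, obj.Type())
	}
}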

find callers of your package and run their tests against your new version too at pre-release time, before any tag is pushed. etc.

This is not really practical for anyone who is not Google.

This is not really practical for anyone who is not Google.

With all the cloud providers starting to offer pay-by-the-second containers-as-a-service, I see no reason we couldn't provide this as an open source tool that anybody can run and pay the $0.57 or $1.34 they need to to run a bazillion tests over a bunch of hosts for a few minutes.

There's not much Google secret sauce when it comes to running tests.

With all the cloud providers starting to offer pay-by-the-second containers-as-a-service, I see no reason we couldn't provide this as an open source tool that anybody can run and pay the $0.57 or $1.34 they need to to run a bazillion tests over a bunch of hosts for a few minutes.

This requires an account with a specific cloud provider, requires that you accept a specific cloud provider's terms of service (which you may or may not be able to do for legal reasons, even if most of the world treats this as if it doesn't matter), requires that you live in an area that the cloud provider services (e.g. if you're in Iran and the cloud provider is in the United States you may not be able to use it due to export laws), and requires that you have the money to pay the cloud provider (presumably every time you do a release). It may not be much money, but that doesn't mean everyone will be able to pay it. If we want Go to be inclusive and usable by a diverse audience, this does not seem like a good solution.

/two-cents

@SamWhited MeteorJS had this working with Galaxy, a built-in meteor publish command to run your project in a cloud provider. Maybe I'm misunderstanding the problem posed, but their swing at it seemed fine.

@bradfitz What if the API doesn't change but the behavior behind it does? That is a case that breaks SemVer compatibility and impacts those that import it. How do you detect this situation? I ask because I've experienced it on more than one occasion.

open source tool that anybody can run and pay the $0.57 or $1.34 they need to to run a bazillion tests over a bunch of hosts for a few minutes.

This now touches on cost. This may feel fine for folks in a tech city in the US. What about people in Africa, Central America, or other places? Go developers are globally distributed. How are tools like this made generally accessible outside the "tech elite" circles?

And what about all the cases of not doing public cloud work? On-premise (a lot of people do it) or proprietary work with trust issues. How will this stuff work for them? If you have an in-house tool, you may not want to leak to public services what imports you're using. Say you get your imports from GitHub but this service runs in Google. Do people feel OK handing their dependency tree to Google? A bunch won't.

It might also be able to query godoc.org and find callers of your package and run their tests against your new version too at pre-release time, before any tag is pushed. etc.

Let's take Kubernetes as an example of this. Someone writes a package that's imported into Kubernetes, so a tool has to get that and run all the tests. Parts of it are designed to run on Windows and POSIX. Can we test for multi-OS/multi-arch (since Go handles that)? What's that really going to look like (and cost)?

--

I think tools like this can be useful. I don't mean for folks to think otherwise. I just don't see how they solve the problem for many people. They aren't practical enough or don't fit every setup.

It feels like we're trying to solve a mathematical problem with known and controllable constraints. But people are messy, so we need fault-tolerant solutions.

To quote @technosophos earlier today:

"version managers aren't actually tools for compilers or linkers or anything... version managers are for people collaborating."

For what it's worth, he's written more than one dependency manager, studied others, and talked with people who've written even more.

It feels irresponsible to accept a proposal while there are still foundational concerns outstanding on the approach. Consensus between the author and community domain expert @sdboyer seems like a reasonable minimum standard to reach before the proposal is considered accepted.

Just to pile on a bit here: we have a history around packaging which is both non-standard and sub-optimal (the go get ecosystem). It would be better to give up on a standard package manager entirely than to have another go get released which pushes people toward bad practices in the name of compatibility with the "standard" tool. As someone who has been using Go since its public release, I find this conversation frustrating and disheartening, as it seems the leadership of the Go team has not learned the lessons of the mistakes made with go get (née goinstall).

There are reasonable and practical problems which have been voiced about this proposal. We should change the proposal to address them, not simply say "working as intended." If we can't do it right for 1.11 or 1.12 or 1.13, then we should wait until it can be done right. This proposal describes a system that behaves significantly differently from most other systems, and not in a good way.

The motivating reason behind MVS seems to be that the traditional approach is NP-complete. I find that a very poor motivation. When dealing with NP-complete problems the main question is: "How often do the hard instances arise?" With package management the answer appears to be "very rarely." We should not settle for an incomplete and unhelpful problem formulation just to avoid the NP-hard label on the problem.

Most of the concrete points have been voiced by other people closer to the issue (@sdboyer @peterbourgon @mattfarina etc...). My main gripe is that we are accepting this proposal when these concrete points have not been adequately addressed.

@SamWhited, you're taking issue over an optional feature of a hypothetical design. The hypothetical user who doesn't trust any cloud provider or can't use any cloud provider inside their country's firewall or doesn't want to pay can always run tests (or a fraction thereof) on their own machine over night. Or just use the go release signature checking, which gets you 95% of the way there for free.

@mattfarina, @SamWhited, let's move discussion to https://github.com/golang/go/issues/25483.

@mattfarina

I've encountered issues across these languages and more where packages broke from SemVer compatibility issues. Sometimes intentionally and sometimes by accident. What will Go do differently to avoid a problem present in all of these other languages because people are fallible?

It is still not clear to me why vgo is assumed to perform worse in these cases than dep. On the contrary, it seems to me that vgo performs strictly better. In your blog post you mention a specific problem with helm as evidence of the failure of the vgo model. However, in the vgo world, there are two scenarios in which vgo would have chosen to use v1.4.0 for grpc:

  1. The developers of helm chose to specify grpc >= v1.4.0 in go.mod as a requirement. In that case, they can simply revert this requirement and thus roll back to a previous version of grpc that works for them.
  2. The developers of a dependency of helm chose to specify grpc >= v1.4.0. In that case, dep would have installed it too, and if helm tried to roll back by restricting grpc < v1.4.0, dep would have to croak because of conflicting requirements.

So it seems to me that vgo solves this problem at least as well as dep would. Meanwhile, in a dep world, there is another option:

  1. The transitive dependencies of helm required grpc >= v1.x.0, with some x < 4; then grpc released v1.4.0 and dep decided to use the newer version on install, without being asked. In this case, helm would be broken and would need to scramble to do a bugfix release (as described in your post), creating toil. Meanwhile, vgo would have just ignored the new version and things would continue to work fine.

I am apparently overlooking something basic. But last time I asked for clarification about this on slack, I got deferred to a hypothetical future blog post. Hence my frustration, because I really don't understand where the idea is coming from that vgo somehow has more problems with people breaking the semantics of their versions than dep would have.

To clarify, this is not about dep. It's an analysis of how MVS (vgo) behaves in comparison to other SAT-solver based package management solutions (dep included).

Another example by Matt states the following:

You are the developer of module app and you have two dependencies:

  • grpc [>= 1.8]
  • helm, which has the following dependency:

    • grpc [>= 1.0, < 1.4] (because grpc introduced a breaking change in 1.4).

Most people won't know about the transitive dependency requirements (helm -> grpc [>= 1.0, < 1.4]) because that would require being aware that helm breaks when using grpc >= 1.4. I'd assume that's not something the majority of people are going to care about or spend time and energy investigating.

If you are using vgo (specifically relying on MVS), you are going to get:

  • helm
  • grpc [>= 1.8]

That should be an invalid combination (since helm's requirements are not satisfied), and most dependency managers would give an error message telling you that you have a conflict (given that helm has stated its requirements in some format).

This is a significant point of criticism of MVS and vgo. It doesn't want to allow dependencies to state when they have an issue with a specific version to avoid needing a full-SAT solve. You can refer to the Theory and Excluding Modules sections in the MVS article for more explanation.

MVS doesn't want to acknowledge this possibility (or at least wants to restrict the ability to dictate it to the current module) under the assumption that SemVer is always respected and therefore this is not needed, which in practice is not always the case. I'd refer to your article about backward compatibility to show why adhering to backward compatibility is hard, and why forcing it the vgo way is impractical.

The MVS solution is to ask the module developer to specify which versions to exclude, ignoring the knowledge that the dependencies' authors could share about these incompatibilities. This transfers to the developer the responsibility of knowing and performing the convoluted logic that most dependency managers go through to figure out which versions of which dependencies are compatible with each other (i.e., an SAT solve).

Unless people magically start conforming to SemVer, I'd imagine installing a dependency when using vgo turning into an eccentric exercise where, after importing a module, you'd have to check a README file and copy its list of incompatible versions into your go.mod file. Keep in mind that the exclude statement currently takes only one version.
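To make that concrete, here's a hedged sketch of what that exercise might produce in a go.mod file (module paths hypothetical, with helm's v2 written in semantic-import-versioning style); since exclude takes a single version, every known-bad release needs its own line:

    module example.com/myapp

    require (
        example.com/helm/v2 v2.1.3
        example.com/grpc v1.3.5
    )

    // Copied by hand from helm's README of known incompatibilities:
    exclude example.com/grpc v1.4.0
    exclude example.com/grpc v1.4.1
    exclude example.com/grpc v1.4.2

Every new incompatible grpc release would mean another manually added line in every consuming module.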

@Merovius

In @mattfarina's blog post, they describe Helm upgrading to grpc v1.4.0 intentionally, that is, by the equivalent of updating go.mod. The question is not how dep or vgo avoids this problem, but rather how they enable the library author to recover from it. In the case of dep, they can release a patch that adds the grpc<v1.4.0 constraint. For users who possess no other dependency on grpc, this will just work. If a user has already updated to the latest version of helm and possesses another dependency on grpc>=v1.4.0, this constraint will cause their build to fail; not ideal, but better than subtly broken runtime behavior.

With vgo, do the Helm maintainers have any equivalent options available? If a user has upgraded grpc to v1.4.0 for any reason, MVS will always choose grpc@v1.4.0, Helm will be broken, and there's nothing the Helm maintainers can do about it (sans adjusting for the changed behavior, which is often time consuming). I can't speak for @mattfarina, but that is how I read the concern raised and it's one that I share. With dep and similar systems, intermediate libraries can defend against (or more often, recover from) compatibility violations with upper bounds constraints.

@ibrasho @mattfarina

It seems like one of Russ's examples in the GopherconSG talk covers a very similar (maybe identical?) scenario.

In the post above, you say that helm depends on "grpc [>= 1.0, < 1.4]". But that is not strictly accurate. I think what you mean is that some specific version of helm depends on "grpc [>= 1.0, < 1.4]". Let's say that version of helm is 2.1.3; presumably this version of helm was released after grpc 1.4 (otherwise it wouldn't have known to mark itself incompatible with grpc 1.4). Russ's point in that talk is that the prior version of helm (say 2.1.2), presumably would not (yet) have marked itself as incompatible with grpc 1.4.

In other words, a satisfying assignment (according to dep) would've been helm = 2.1.2, grpc = 1.8. Dep could correctly choose this version assignment without error, as it satisfies all of the constraints at the given versions.

@balasanjay

Helm's version doesn't fundamentally change the example. helm@2.1.3 might declare a dependency on grpc@1.3.5 and the user of helm might have a dependency on grpc@1.4.0 directly or through some other package. In this scenario, MVS fails and installs helm@2.1.3 and grpc@1.4.0. Besides, this example assumes that the user intentionally updates to grpc@1.4.0 (and perhaps even grpc@1.5.0, grpc@1.6.0, etc.) before the problem is uncovered.

MVS intentionally disallows exclusions from intermediate dependencies to achieve its theoretical goals:

Negative implications (X → ¬ Y, equivalently ¬ X ∨ ¬ Y: if X is installed, then Y must not be installed) cannot be added...

For this example, we need to express: if X = helm >= 2.1.3 is installed, then Y = grpc >= 1.4.0 must not be installed, which we cannot do by design.

In effect, only the top-level package (by fixing the breakage) and bottom-level users (by manually adding excludes) have any recourse when compatibility rules are broken. Intermediates like Helm must work with all future versions of any dependencies, which, to me, significantly reduces the value proposition of such libraries.

Perhaps this will encourage better behavior in heavily used libraries like grpc and aws-sdk-go because the full brunt of any breakage will be felt by all parties and will be more difficult to work around. However, it's not always that easy. What if the "breakage" is not really breakage, but a legitimate change in behavior that has unintended consequences for some user, but is helpful for others? That type of thing is not uncommon and upstream authors would be rightfully hesitant to revert a change in that scenario.

I don't think I was nearly as clear as Russ. Let me try again.

Initially, the state of deps looks like this.
Helm 2.1.2: grpc >= 1.0

Then, grpc 1.4 is released. The developers of Helm realize there is an incompatibility, and so push a new version of Helm:
Helm 2.1.3: grpc >= 1.0, < 1.4

I start a new app that depends on Helm and some new feature of grpc (for the sake of argument). I list the following deps:
Helm >= 2.0.0
grpc >= 1.4

The claim above was that dep would notice the inherent conflict, and report an error.

But Russ pointed out that that's not true, because there is a version assignment with no conflicts. Specifically, dep could choose to use helm 2.1.2 and grpc 1.4. According to dep, this is a valid assignment, because helm 2.1.3 is the one that is incompatible with grpc 1.4, whereas 2.1.2 is compatible with all grpc >= 1.0.

In other words, dep could _validly_ choose to "fix" a conflict, by downgrading to a version that hadn't yet recorded the conflict. If you have immutable versions, then this seems inevitable. Therefore, it seems that dep would also mishandle https://codeengineered.com/blog/2018/golang-vgo-broken-dep-tree/.

@balasanjay Sure, but eventually Helm puts out 2.2.0 with a neat new feature you want, and you try to upgrade, and then the problem kicks in.

(I'm cautiously optimistic that this will work out ok if vgo has a well designed UX for making people aware that their MVS outcome has a "conflict" of this form. By "well designed" I mean that it encourages people to deal with the issue instead of just being OK with lots of warnings all the time.)

I certainly agree that there are situations where dep will detect a conflict, but just wanted to be clear that it is _not_ the one presented in that blog post (or at least, not with the version requirements as currently presented).

And FWIW, I think it's reasonable to question the effectiveness of a safeguard if it breaks on a relatively simple example of a conflict.

For the record, I tracked down Russ's example (it is pretty much identical to this one, to an impressive degree): https://youtu.be/F8nrpe0XWRg?t=31m28s

Re: minimal version selection, I'm having trouble figuring out how to resolve the following situation:

Imagine a popular library in maintenance mode, e.g. v1.2.3 has been out for over a year with no changes. Many other libraries depend on it.
A developer comes along and realises a speed optimisation that can be made to v1.2.3. It speeds the library's core function up by 10x, with no API change! This is released as v1.3.0. No more bugs are found in this package for the following year.

How do dependees end up getting this update? They can work with v1.2.3, but v1.3.0 is evidently better.

@daurnimator

How do dependees end up getting this update?

Each application would need to set 1.3.0 as the minimum version to use.
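Concretely, picking up the update is a one-line change in each application's go.mod (module path hypothetical):

    module example.com/myapp

    require example.com/fastlib v1.3.0 // was v1.2.3; raise the minimum to get the 10x speedup

Something like vgo get example.com/fastlib@v1.3.0 would make that edit for you; the point is that under MVS nothing moves until a consumer raises its own stated minimum.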

Well, what a silly day it is today! 😄

This proposal has been open with active discussions for over two months

This problem has been open for years, and the timeline rsc set for all of this was quite artificially short, all things considered.

There have been two objections of this proposal which we feel we should speak to:

  1. The proposal fails to provide a technical solution to all possible scenarios that might arise involving incompatibilities.

If this is an allusion to the positions I have outlined to Russ, it is a misleading caricature. One reason it's taken me so long to marshal my thoughts on this topic is because the repeated use of misleading arguments and statements is...frankly, destabilizing.

Literally no one thinks that ALL possible scenarios can be covered. The problem is, and always has been, that MVS only solves an _illusory_ problem (avoiding SAT), creates _new_ problems in its stead that matter much more in practice, and cannot function effectively as an intermediate layer on which we can reasonably operate.

These are accurate in their observation but working as intended. Authors and users of code will have to change some of their practices around using and releasing libraries, just as developers have adapted to other details of Go, such as running gofmt.

Nothing is helped by trivializing the degree of changes being proposed here by comparing them to running gofmt - possibly the single most mindless activity we perform as Go developers.

As Russ pointed out in his recent talk at Gophercon Singapore, the only permanent solution to incompatibility is to work together to correct the incompatibility and maintain the Go package ecosystem. Temporary workarounds in a tool like vgo or dep need only work long enough to give developers time to solve the real problem, and vgo does this job well enough.

Let's be clear - what matters is the _community_. Us. The people that produce the software ecosystem. We get to a better ecosystem by creating tools that help us to spend our limited tokens on useful collaborations - not by creating brittle tools that punish us for participating in the first place.

@Merovius:

I got deferred to a hypothetical future blog post. Hence my frustration, because I really don't understand where the idea is coming from that vgo somehow has more problems with people breaking the semantics of their versions than dep would have.

Getting all these arguments together takes time, and it's not my dayjob. I was aiming to finish tonight, but I'm still on my final editing pass. 😢 Tomorrow morning...? Nothing's set in stone!

The problem is, and always has been, that MVS only solves an illusory problem (avoiding SAT), ...

Respectfully, IMHO the SAT solution is a non-solution, therefore for me, MVS solves a very real problem.

IMHO the SAT solution is a non-solution, therefore for me, MVS solves a very real problem.

Would love to understand the reasoning behind that, if possible?

This problem has been open for years, and the timeline rsc set for all of this was quite artificially short, all things considered.

I see two clearly conflicting ideas here. The surveys have repeatedly shown that dependency management needs a solution sooner rather than later. Yet a six-month timeline is not enough to review and approve a proposal.

I understand that we shouldn't pick the first solution to be implemented. But time is one of the factors. The counter-proposal to vgo isn't yet well defined, and would very likely take years to come to life. I'd pick vgo in Go 1.11 over a super-awesome SAT-based solution in Go 1.14 any day.

This is also assuming that what vgo implements is set in stone. For all we know, vgo could change considerably during the 1.12 and 1.13 windows.

Would love to understand the reasoning behind that, if possible?

It's the same reasoning that's behind preferring Go's regexp to PCRE, i.e. not allowing a program to depend on an algorithm having quadratic/exponential worst case complexity.

We should keep in mind that vgo will just replace go get, nothing more, nothing less. We should compare vgo with go get, not vgo with something that doesn't exist. "Worse is better"! Let's focus now on the implementation details.

FWIW, it is IMO completely possible to introduce upper bounds without having a full SAT solver - it is just not possible to have the generated solution be optimal or to guarantee that an existing solution will be found. If need be, you can

  1. Make it possible to specify upper bounds
  2. Ignore them when solving via MVS
  3. Check the found solution against all the bounds and croak if it doesn't fit (this is the key difference: Traditional approaches would try to find a different solution instead)

This would then mean you get the advantages of MVS (a simple algorithm without pathological runtimes) while preserving the ability to mark incompatibilities and detect them statically, but you give up the guarantee of always finding a solution if one exists. Though it can be argued that the solution you would otherwise find isn't actually a solution anyway, because the incompatibility is still there, just unknown to the algorithm.
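A minimal sketch of that check step, assuming hypothetical types and data (none of this is vgo's actual API; golang.org/x/mod/semver is used purely for brevity): after MVS has picked versions, each declared bound is simply evaluated against the result, with no searching or backtracking.

    // Hypothetical post-MVS bound check; never alters the selection.
    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // Bound says "module Path must stay below Max (exclusive)",
    // as an intermediate library might declare.
    type Bound struct {
        Path string
        Max  string // e.g. "v1.4.0"
    }

    // checkBounds runs after MVS has already chosen versions: it only
    // evaluates each bound against the choice and croaks on violation.
    func checkBounds(selected map[string]string, bounds []Bound) error {
        for _, b := range bounds {
            if v, ok := selected[b.Path]; ok && semver.Compare(v, b.Max) >= 0 {
                return fmt.Errorf("%s@%s violates declared bound < %s", b.Path, v, b.Max)
            }
        }
        return nil
    }

    func main() {
        selected := map[string]string{"example.com/grpc": "v1.4.0"} // MVS output
        bounds := []Bound{{Path: "example.com/grpc", Max: "v1.4.0"}} // a library's declaration
        fmt.Println(checkBounds(selected, bounds))                   // reports the conflict
    }

Note this is evaluation, not solving: the runtime stays linear in the number of bounds.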

So at least to me, it would seem that it's perfectly possible to retrofit some way to get upper bounds onto MVS. I'm still not convinced it's actually needed, but should that turn out to be the case, it can always be added later. That makes the "MVS is fundamentally unfit" assertion problematic. But I still might misunderstand the problem. And there may be a failure scenario better illustrating it?

@sdboyer No worries. As I said, I completely understand that this is not your dayjob and I appreciate the time you are taking to participate. It's still a practical issue of "if we don't know the arguments, we can't talk about them". I can imagine that you are just as frustrated with the whole situation as I am :)

It's the same reasoning that's behind preferring Go's regexp to PCRE, i.e. not allowing a program to depend on an algorithm having quadratic/exponential worst case complexity.

@cznic I'm of the opinion that dependency management is not only a technical problem; it's intertwined with social constraints and considerations. So I'm not sure comparing it to regular expressions (a purely technical implementation issue) and favoring an algorithm purely on the basis of time complexity is a fair approach.

I see some people favoring MVS because it's simpler and easier to reason about, and that's an understandable consideration if it resolves the other aspects of the problem at hand.

I was wondering if people have other reasons to prefer it over an SAT-based algorithm.

I was wondering if people have other reasons to prefer it over an SAT-based algorithm.

@ibrasho MVS doesn't need a lock file.

I was wondering if people have other reasons to prefer it over an SAT-based algorithm.

Personally, my reasons are

  1. It unifies lock-file and manifest into one file, which can largely be computer-edited. This means less noise while still getting reproducible (or "high-fidelity") builds in the common case. AFAICT this is something unique to MVS. Since one of my main frustrations with other package managers is the toil of having to edit manifest files, this is huge to me.
  2. Gradual repair will be easier/made possible. As I'm a strong believer in this approach to solve the distributed development problem, this is also important for me.
  3. It may help the ecosystem to collectively stay close to HEAD for most libraries. This is largely speculation and depends somewhat on the tooling available. But if a "pull everything to the latest version" is sufficiently simple, it can increase the cadence with which newer versions of libraries are tested against each other in production.
  4. It achieves all of this while preserving or surpassing the desirable qualities of other approaches, AFAICT. Upgrades only happen if desired. And binary packages will continue to build even if the build of some of their dependencies gets broken in a new version, giving the author time to work on a fix.

This last part especially is sort of ironic, because this exact quality was painted as extremely important in @sdboyer's last post - but at this point, I still believe vgo to be strictly better in this regard than traditional approaches.

@Merovius
"It unifies lock-file and manifest into one file"

  • this isn't entirely true, as manifests are effectively scattered all over your files: import statements act as manifests.

@docmerlin When I said "manifest", I meant a file listing all the modules you are depending on with their appropriate versions (in particular, it contains strictly more information than the import statements, otherwise you wouldn't need it). In the case of vgo that would be go.mod, in the case of dep it's Gopkg.toml. You could also call that the config file for the dependency solver. If you consider a different term more appropriate, feel free to substitute that for manifest, when you are reading my comment.

Note that it is pretty normal for there to be both a file listing all dependencies and, per file, a list of explicit imports (which are potentially a strict subset of the dependencies). In particular, all Go dependency managers I'm aware of do that. Other approaches also use a lock-file which describes specific, precise versions to use to ensure reproducible builds. And the distinguishing feature of MVS I was referring to is that this file is not needed, because it can be uniquely derived from the manifest/dependency-solver-config/go.mod file.

(deleted + reposted from my personal account)

@cznic the focus on the complexity classes of MVS vs a SAT-based approach in isolation doesn't make sense (as I think @sdboyer writes somewhere -- we get caught up talking about it because it is one of the few things we can name/identify).

@bradfitz's suggestion in #25483 to address some of the concerns with vgo (which I think is an initially crazy but maybe great practical solution) involves running the tests from arbitrary users of your API before a release. This is an NP-complete problem in general, but in practice might be a great solution (just like gps2).

So on the one hand we have SAT-based package management algorithms, and on the other hand we have an algorithm in NL that forces us to do NP-complete work later on (or time out, which is what any practical SAT solver would do for adversarial lock files).

the focus on the complexity classes of MVS vs a SAT-based approach in isolation doesn't make sense ...

I'm not sure where the term 'focus' comes from. It's just that if there are two algorithms available, one whose worst case is quadratic or worse and the other linear or better, I choose the latter and avoid the former. I'd call it a principle, not focus.

... suggestion in #25483 to address some of the concerns with vgo ...

Seems like issue #25483 is not related to vgo. Typo?

It's just that if there are two algorithms available, one whose worst case is quadratic or worse and the other linear or better, I choose the latter and avoid the former.

@cznic sure, usually if two algorithms give you the same results you want the one with lower complexity (although this isn't even always so cut and dry - Python uses insertion sort for small inputs because despite having worse complexity bounds, it has better constants + runtime up to a point).

In this case (MVS vs. SAT-based algorithms), the results differ on-purpose and have broad consequences. I'm suggesting that because of this you can't just compare the algorithmic complexity, you need to consider their wider impacts.

Seems like issue #25483 is not related to vgo

The first line in that issue is a link to Brad's comment in this issue: https://github.com/golang/go/issues/24301#issuecomment-390788506 . While this tool will be useful outside of vgo, it seems like it is being considered largely to mitigate some of the downsides of MVS (in my opinion/understanding).

Yikes, so many comments. I'll try to add some details and history to help.

A couple years ago there were a number of different dependency management solutions (e.g., godep, glide, etc). To figure out a path forward a few things happened:

  1. A group of invested and knowledgeable people came together in a committee. Note, a go team member at Google was part of this group.
  2. A second group of people who had authored dependency managers or had information about them supported this first group.
  3. A survey of the community on the needs and thoughts on existing tools (in Go and outside of Go) was performed. Note, some results were private for the committee's eyes only.
  4. Companies that use Go for production were interviewed to get details about their needs. The details of this are not public so people could speak freely.

The survey and interview data was all fed back into the committee. After looking over the data and debating it, they decided we needed a solution with certain features and created a spec. After that, development on dep began in order to meet those needs.

At Gophercon last year there was a contributor summit that included people who were invested in dependency management talking about it. Near the end of those conversations Russ came over to the table and said something like, "I can do better if I go off on my own and build something." He did that and came up with vgo. It was done separately from the community.

Does vgo meet the needs people expressed in the survey and interviews? From what I gather from Russ, he's not read the results but he has criticized the process. It might be worth someone doing a mapping.

Oh, at the summit Russ did share one of the reasons, at that time, he didn't like a SAT solver. He wanted a solution with fewer lines of code because he didn't want the Go Team at Google on the hook for maintaining that much code. I remember that specifically because I did some LOC comparisons between dep, glide, and other tools after that to get a better picture of the differences.

Go is a Google owned and run project. If the issue is one of resources should Go be under a foundation and people from other organizations become involved in ownership of it? That's another way to add resources to maintaining the toolchain.

No, #25483 is not related to vgo. I just listed it offhand as another sort of thing that a hypothetical go release helper command could do. But it would be useful at any time.

My larger point was that Go has never made it super easy to do releases of Go packages. In a prior life I wrote http://search.cpan.org/dist/ShipIt/ for automating releases of Perl CPAN packages and it made a huge difference when you have tooling around such things versus doing it by hand.

I only mentioned a hypothetical go release at all because I was asked what Go might do differently to help humans not make mistakes.

The one actual problem I see with vgo seems trivial to fix: the max version problem. If one library breaks with v1.7 of a dep, there's no way to specify that other than an exclude... which only helps until v1.8 comes out, which is likely still broken in the exact same way.

It seems like it would be trivial to add max version to vgo just as a way to help vgo determine when two libraries are incompatible. If one library says it needs at least v1.7 and another says it can't use anything past v1.6, vgo can report the incompatibility instead of silently picking a version.

Without this, vgo will just use 1.7, and one of the libraries will break. If you're lucky, there's a compiler error; more likely, there's just a subtle behavior bug that might not get noticed until way down the road. And what's insidious is that it's probably some transitive dependency you're not even calling directly.

@natefinch max version restrictions nudge MVS out of its very clever avoiding-SMT-solver territory.

I think @natefinch means max versions as a final check/filter. Once MVS has done its job, vgo would error if any of these max-version restrictions were not satisfied. That still doesn't get us into SAT solver territory.

Exactly. There's no solving. There's just "vgo says 1.7 is the right thing to use, but module X states it doesn't work with that version". This is already something that can happen today with vgo if you have something that says require foo 1.7 and something else says exclude foo 1.7, if there's no higher foo version.
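For illustration (module paths hypothetical), the analogous situation today: if v1.7.0 is the newest release of foo, these two declarations leave MVS with nothing to pick, so all it can do is report the conflict.

    // In some dependency's go.mod:
    require example.com/foo v1.7.0

    // In the main module's go.mod:
    exclude example.com/foo v1.7.0
    // With no release of foo above v1.7.0, no version satisfies both,
    // and a human gets to sort it out.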

And then what are you supposed to do?

You use your human brain to figure out a solution, because there's no mechanical solution. You're trying to use two libraries that state categorically they require incompatible versions of a library. A will break with 1.6, B will break with 1.7. The end.

You need to either convince the dependency author to fix the bug (if it's a bug), convince one of the library authors to release a new version that is compatible with the new dependency version, or fork a whole bunch of stuff and fix it yourself.

There's no good answer. But there's no tooling in the world that can fix this for you, either.

@sdboyer

The problem is, and always has been, that MVS only solves an illusory problem (avoiding SAT),

I can't speak to your private conversations with Russ, but “avoiding SAT” seems like a strawman argument to me. “Avoiding SAT” is no more a goal than “using SAT”: neither has any concrete impact on users.

As I see it, the concrete problems that MVS solves are:

  • making builds reproducible by default,
  • building as close to the tested versions as feasible, and
  • avoiding spurious rejections.

We get to a better ecosystem by creating tools that help us to spend our limited tokens on useful collaborations - not by creating brittle tools that punish us for participating in the first place.

I agree. But I want to take a closer look at your assumptions: which tools “punish” participation and why?

One of the examples you gave in your post was, "Our project depends on X@1.2.3 right now, but it doesn't work with X@1.3.0 or newer. We want to be good citizens and adapt, but we just don't have the bandwidth right now."


The first (implicit) assumption you're making is that you (or your users) have had the time and bandwidth to test your project exhaustively against all releases of its dependencies.¹ For some projects, that assumption in and of itself may not hold: by setting the expectation that everyone tags upper bounds when they discover incompatibilities, you end up “punishing” maintainers who don't have the bandwidth to test for regressions and update bounds on potentially all of their releases.

In contrast, under MVS, maintainers are only obligated to declare one thing: "We tested thoroughly against X@1.2.3 and it worked." If your users don't do anything to disrupt that, they build against X@1.2.3 and everything continues to work as it did before.

Under MVS, _breakage only occurs at the time of an explicit upgrade._ That's a significant benefit: if your user upgrades some other dependency so that it pulls in X@1.3.0 and breaks their use of your package, then they can simply back out that upgrade until you have time to fix it, _or until the maintainer of X has time to correct the incompatibility._ And given the right expectations, the latter may be much more likely than the former: notching out X@1.3.0 may turn out to be needless busy-work if X@1.3.1 restores compatibility.


The second assumption you're making is that every incompatibility that affects _any part_ of your module affects _all users_ of your module. If the part of your package that breaks under X@1.3.0 is a little-used function, why hold back your users from upgrading X in their program that doesn't even call it?

You could argue that if some parts of the module don't depend on other parts, they should be separate modules, but that again forces extra work onto package maintainers: if I only have the bandwidth to test and upgrade periodically, I may only have the bandwidth to maintain one module, not a whole complex of fine-grained modules with fine-grained dependencies.


¹ In general, detecting incompatibilities with dependencies requires you to test against _the cross product of all releases_ of those dependencies: it's possible that your particular use of X@1.3.0 is fine, but it breaks in combination with some other package you depend on. For example, perhaps you and your dependency require incompatible options in a global configuration.

they can simply back out that upgrade

That assumes this is a compiler error or something similarly highly visible. It's a lot more likely to be a subtle behavior bug that isn't immediately apparent. You update, run your tests, and all looks good. Maybe a timeout that was 10 seconds is now 30, and that throws off the timeouts in the rest of your stack when under load. You wouldn't notice until it's running in production (something similar happened to us with a change in mgo a few years ago).

If the part of your package that breaks under X@1.3.0 is a little-used function, why hold back your users from upgrading X in their program that doesn't even call it?

Because they probably won't know whether they are using that function if it's a transitive dependency, and who's to say that some unused codepath won't suddenly get activated down the road when you change your code? So that broken function is now getting called. It's far better to trust the library maintainer, who is an expert in their own library: if they say it doesn't work with 1.7, don't build with 1.7.

That assumes this is a compiler error or something similarly highly visible. It's a lot more likely to be a subtle behavior bug that isn't immediately apparent.

There are subtle bugs lurking in nearly every version of nearly every software package. We don't generally mark releases as completely unusable just because we've found one such bug; instead, we fix it in a later point release.

What makes this particular class of bug special?

who's to say that some unused codepath won't get suddenly activated down the road when you change your code?

That's exactly the MVS argument: if you change your code, then you detect the breakage _at that revision,_ and you can update the version constraint at that revision too.

That does imply that, as a user, it is easier to bisect failures if you upgrade your dependencies one-at-a-time rather than in bulk, but that's true of changes in general: the smaller your changes, the more precisely you can bisect them, regardless of whether they are changes to your code or its dependencies.

What makes this particular class of bug special?

It's not just a bug. It's bigger than that. The library author, who is the foremost expert on their code, has told the world "yo, version 1.7 of X breaks my stuff in ways that are so bad, just don't even build with it".

Clearly, unless it's a compiler error, it's a judgement call. If 99 of your functions panic, but one doesn't... is that just a bug? Or is it a complete incompatibility? What if it's just one function that panics?

At some point, a human has to make that decision. I would much rather deal with a pre-emptively declared incompatibility that surfaces at build time than worry about an undeclared major problem making it into production.

It's not just a bug. It's bigger than that. The library author, who is the foremost expert on their code, has told the world "yo, version 1.7 of X breaks my stuff in ways that are so bad, just don't even build with it".

There is another way for a library author to express this: Write a test that fails when used with an incompatible version of X. This will have the advantage of requiring no changes to the library if X releases a fixed version, as well as catching any future regressions.
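As a hedged sketch of that idea (the dependency path and its API are made up for illustration): the library's own test suite pins the behavior it relies on, so building against an incompatible version of X fails loudly.

    package mylib_test

    import (
        "testing"
        "time"

        "example.com/x" // hypothetical dependency whose behavior might change
    )

    // TestXDefaultTimeout pins the behavior mylib relies on. If a new
    // release of x silently raises the default from 10s to 30s (the kind
    // of subtle change discussed above), the suite fails at test time
    // instead of the breakage surfacing in production.
    func TestXDefaultTimeout(t *testing.T) {
        if got := x.DefaultTimeout(); got != 10*time.Second {
            t.Fatalf("x.DefaultTimeout() = %v; mylib assumes 10s", got)
        }
    }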

Write a test that fails when used with an incompatible version of X.

Yeah, I thought of that. But then you're asking every top-level binary to run all the tests for all transitive dependencies, tests which might require infrastructure that you don't have. Not all test suites are 100% isolated.

There's a reason why go test ./... doesn't run tests in the vendor directory.

To summarize (likely poorly), it seems like the main technical argument against vgo is that there needs to be a way for libraries to declare globally respected constraints on their own dependencies.

Why not have both sides agree to sanction a competing proposal that explores how GPS2 could fit in with the parts of vgo both sides like? Then merge the vgo experiment in the meantime and revisit the GPS2 proposal before vgo merges into mainline?

As someone who struggled with many dependency tools in the past I am excited for vgo. FWIW, as a "community" member, I feel well represented by the vgo solution so far. However, I remain open to considering the arguments in favor of adding global version constraints and look forward to that argument developing further.

Just throwing this out there:

That assumes this is a compiler error or something similarly highly visible. It's a lot more likely to be a subtle behavior bug that isn't immediately apparent. You update, run your tests, and all looks good. Maybe a timeout that was 10 seconds is now 30, and that throws off the timeouts in the rest of your stack when under load.

This is an issue in software in general. NPM (the other versioning tool I personally have the most familiarity with) uses an SAT solver combined with a community that has strongly embraced semver, and this problem still exists.

As far as the transitive dependency discussion, the reality is there is no magic wand: at some point the developer must be aware of the dependencies they use and put in the time to properly test the code for which they are ultimately responsible. My employer can't be the only one where we are required to justify every library we use, including transitive libraries, for legal and other reasons.

Honestly, to my eye, a lot of the complaints towards vgo seem to be "it's not perfect, therefore it won't work". Let's not fail to implement a good solution for want of a perfect solution.

To me, it seems very aligned with the overall Go philosophy for the core team to provide a tool that is good in most situations, leaving it to the community to provide more advanced and/or specific tooling.

@malexdev There may be a problem with how you've cast the actors in the argument that:

Let's not fail to implement a good solution for want of a perfect solution.

dep is a good solution today, one that was created by the community working together. vgo, as stated earlier, makes a couple of assumptions:

  1. That no one will ever break from semver, even by accident
  2. That we will need to rework the existing codebases in the existing ecosystem

vgo is based more on the "perfect solution" in a "perfect world", while dep works today, following what already works in other programming languages' ecosystems while being fault tolerant.

You can see this in the history of the package management problem space I just wrote up.

To summarize (likely poorly), it seems like the main technical argument against vgo is that there needs to be a way for libraries to declare globally respected constraints on their own dependencies.

I agree with this summary.

Why not have both sides agree to sanction a competing proposal that explores how GPS2 could fit in with the parts of vgo both sides like?

At this point, I remain unconvinced that GPS2 is a necessity or even a good idea. The ability you outlined above can be retrofitted to vgo on top of MVS (like this or this). Personally, I'm hoping that @sdboyer's next blog post will contain good arguments against MVS itself, but right now, I don't really see any reason for a different algorithm - especially one that would cost significant UX advantages of vgo.

Though, to be fair, I also don't see any reason against experimenting with that.

dep is a good solution today.

I'm not sure I agree. I haven't used it myself, but on slack there were several people complaining about issues that seem directly traceable to its SAT-solving approach. And I do know that I'm very unhappy about dep's overall design (leaving aside the question of the dependency solver itself, so general workflow and UX).

  1. That no one will ever break from semver, even by accident

I am sorry, but I have asked repeatedly for justification of this statement and didn't get any. Why would vgo assume this in any way, shape or form more than dep? I fundamentally disagree with this statement. It is an assumption made to explain the algorithm. Just like if you'd explain dep's algorithm, you'd explain the assumptions built into semantic versions. At worst, they show the same breakages if you fail to comply with those semantics.


I think in general it makes sense to distinguish between the different pieces of what we are talking about. Some of the complaints are regarding SIV (semantic import versioning), some MVS, and some the general vgo shell. For example, it is true that MVS can't handle upper version bounds, but that does not mean that vgo couldn't handle upper version bounds. SIV requires changing lots of code out there (by rewriting import statements), but again, that does not even necessarily mean vgo would require that. Though to be clear, I also don't think it's that big of a deal, as long as we can migrate. Which AFAICT we can.

I just watched @rsc's GopherConSG opening keynote about versioning. He addressed the scenario where a dependency introduces a breaking change, and compared how vgo and dep would handle it, which seems to be the main concern here. (It's a great watch.)

If I understood his point correctly, dep may break the build as well if a maximum version limitation is used to avoid a bad version. I'd be very interested to see this point addressed by those here who are concerned that vgo falls short of dep in this regard.

@willfaught To us, dep breaking the build when a maximum version limitation is used to avoid a bad version is considered a success! This is what we want to happen. Russ correctly notes that this problem is not resolved automatically. A constraint on Helm>2.0.0 is not going to upgrade the user automatically to Helm@2.1.4, but it would work successfully (downgrade grpc or trigger a build failure if that's impossible) if the user explicitly depended on Helm==2.1.4. Personally, the first thing I usually try when encountering an issue with a library is forcing an update to the latest version. With Dep, this would inform me of the failure introduced by GRPC 1.4.0. With vgo, the only way for Helm to communicate this to me is through documentation.

To repeat the point because it continues not to be understood: neither dep nor vgo can prevent this problem from occurring or provide a foolproof solution. Rather, dep allows Helm to communicate that the problem exists, whereas vgo does not.

To summarize (likely poorly), it seems like the main technical argument against vgo is that there needs to be a way for libraries to declare globally respected constraints on their own dependencies.

I would add some color to this summary in that these constraints are needed to deal with the unhappy path of when compatibility rules are violated. vgo establishes the import compatibility rule and then proceeds to develop a solution assuming that rule is always followed. In this idealized world, the need for upper bounds is limited or even nonexistent. In the real world, this will not happen: developers will release updates that break compatibility, either intentionally or otherwise.

I think @Merovius is on to a workable solution. MVS would proceed as specified; then, after resolution is complete, vgo would check each resolved package for exclusions. If any are found, these are reported to the end user, who can choose either to alter their dependencies so as to meet these constraints or to override and ignore the constraints. I've been on the flip side of this too, and sometimes you know better than the maintainer; it works for your application and that's all you care about.

This restores a path for intermediary libraries to communicate an incompatibility to end users. To repeat the helm example yet again:

User: Helm @2.1.2
Helm: GRPC @1.3.5

==> User upgrades to Helm @2.1.3 intentionally.

User: Helm @2.1.3, GRPC @1.4.0 ("Helm updated, so I can finally use grpc 1.4!")
Helm: GRPC @1.4.0

==> Bug detected! User is seeing some problems with Helm, so check for a new version.

User: Helm @2.1.4, GRPC @1.4.0
Helm: GRPC @1.3.5, weak GRPC <1.4.0

User sees a warning that GRPC 1.4.0 is rejected by the install of Helm @2.1.4. Since Helm is broken for me and that's what I'm trying to fix, I remove my dependency on GRPC 1.4.0 (sadly losing some useful functionality) and rerun MVS. This time, MVS resolves GRPC to 1.3.5 and Helm to 2.1.4, rechecks all the weak constraints, finds they hold, and I'm done.

I don't expect any tool to resolve these problems magically but I do expect some recourse as a middleware library. So far as I can tell, the only option in vgo is to fork and rename all my upstream dependencies (or equivalently, copy them into my project) if I want to insulate my users from compatibility issues with these dependencies. I don't think anyone wants that.

@willfaught To us, dep breaking the build when a maximum version limitation is used to avoid a bad version is considered a success! This is what we want to happen.

The point made in the talk is that vgo is no worse than dep in this scenario. In his example, the build isn't broken in the sense that dep can't find a solution; it's broken in the sense that dep does find a solution, and that solution includes the bad version, resulting in the same bad situation we wanted to avoid.

You really should see the video, which walks through an excellent example, but here's the gist as I understood/remember it:

  • Package versions (including their dependency requirements) are immutable
  • To add a maximum version limitation to an existing dependency for your package, you have to publish a new version of your package.
  • It's possible that dep will choose the previous version of your package in order to satisfy all requirements, in which case your new maximum version limitation will not be present. This allows the bad version to be used after all.

I mostly agree with the proposal to add maximum version exclusions, but I do have this worry: suppose I put "use gRPC >1.4, <1.8" in my library; then in gRPC 1.9, the authors decide, "you know what, Helm was right, we made a breaking change in 1.8, we're reverting to our prior behavior in 1.9.0." Now people trying to import Helm+gRPC won't be able to use 1.9 until Helm releases a version that says "use gRPC >1.4, except 1.8.0, 1.8.1, 1.8.2, but 1.9+ is cool".

In other words, maybe exclude grpc 1.8 is sufficient because we won't know if gRPC 1.9 will be incompatible or not until it's published, at which point Helm can either add it to the exclude list or not.

I know essentially nothing about this space. But from reading the discussion here, it sounds like the biggest disagreement boils down to how to detect erroneous cases. That is, both MVS (vgo) and SAT (dep) handle normal situations more or less well, perhaps not identically, but well enough.

SAT provides an ability that MVS does not: using SAT, the author of the library package P1 can declare "P1 requires P2, and works with P2Version > 1.1 and P2Version < 1.4". MVS can only declare "P1 requires P2, and works with P2Version > 1.1", and cannot express the restriction "P2Version < 1.4". In the normal case, this doesn't matter. It only matters if some operation tries to upgrade P2 to version 1.4. In that case, SAT will report an error, while MVS will not. When using MVS, if the incompatibility is not a compilation error, it may cause a failure long after the fact.
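To make the asymmetry concrete (module paths hypothetical), here is everything P1 can say about P2 in its go.mod:

    module example.com/P1

    // "P1 requires P2, and works with P2Version > 1.1" becomes a
    // bare minimum:
    require example.com/P2 v1.1.0

    // There is deliberately no way to also say "and P2Version < 1.4"
    // here; a range like that is expressible in dep's manifest but
    // not in go.mod.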

No doubt the SAT supporters see other major problems with MVS, but so far this is the one that I understand.

I think it's worth noting that if the restriction expressions are themselves versioned--if they are part of a specific release of P1--then in the normal course of events, before P2 version 1.4 is released, P1 version 2.2 will happily say "P2Version > 1.1". It is only when P2 version 1.4 is released that the P1 authors will notice the incompatibility, and release P1 version 2.3 with "P2Version > 1.1 and P2Version < 1.4". So if you are using P1 version 2.2, neither SAT nor MVS will report any problem with upgrading P2 to version 1.4, although it will fail in some possibly subtle way.

In other words, while it makes perfect sense for a release of P1 to list minimum compatible versions of P2, if the release does work correctly with the most recent version of P2, then it does not make sense for a release to list maximum compatible versions. The maximum compatible version will be either conservative, and therefore increasingly wrong as newer and better versions of P2 appear, or if P2 changes in some incompatible way in the future, will fail to specify that requirement since at the time of the release it doesn't exist.

So if we want to have a system that defines anything other than minimum version requirements, then those requirements must not be part of a specific release, but must instead be part of some sort of metadata associated with the package, metadata that can be fetched at any time without updating the package itself. And that means that the operation "update this package" must be separate from the operation "check whether my current package versions are compatible."

I would claim further--and this is definitely more tenuous than the above--that if "check whether my current package versions are compatible" fails, it is in general unwise to trust any tool to resolve the problem. If the compatibility problem can not be solved by the simple operation "upgrade every relevant package to the current version", then it requires thought. A tool can guide in that thought, but in general it can't replace it. In particular it seems very unwise for a tool to start downgrading packages automatically.

So if we think in terms of

  1. a way to define package metadata describing package incompatibilities
  2. based on that, a tool that reports whether your current packages are compatible

then perhaps some of the major differences between MVS and SAT become less important.

Thanks for saying that so well Ian. To follow up, once we have established versions and vgo, we absolutely want to have a new godoc.org (maybe a different name) that records additional information about packages, information that the go command can consult. And some of that information would be pair-wise incompatibility that the go command could report as warnings or errors in any particular build (that is, reporting the damage, not trying to hide it by working around it). But having versions at all in the core toolchain is the first step, and that, along with just minimal version requirements and semantic import versioning, is what has been accepted in this issue.


We are committed to landing this as smoothly as possible. That will require additional tooling, more educational outreach, and PRs to fix issues in existing packages. All that was blocked on accepting this proposal, since it seemed presumptuous to move forward without the overall approach being accepted. But the proposal is accepted, and work will start landing more aggressively now that the uncertainty is over.

I had the same thought about external info for version compatibility... since version compatibility must be constant across released versions, it doesn't need to be in source control (and in fact being in source control is a definite disadvantage as stated above). It would be nice if there were a proposed solution for this, since it definitely seems to be the one major problem with MVS as proposed.

It's awesome to see the discussion moving organically in this direction. It has been one central thrust of my concerns, and it makes it so much easier to explain foundational issues when folks are already most of the way to it.

@ianlancetaylor, i think you're spot on with this observation about needing to be able to make changes to constraint information on already-released versions. As @rsc indicated, such a service is something we've discussed/i suggested in our meetings. We could do it with godoc.org, or something else, sure. But i actually don't think it entails a separate service, and it would be better without one. I made a quick reference to this in the piece i published on Friday (just up from that anchor). If nothing else, in a service, there are questions that then have to be answered about whose declaration of incompatibilities should show up in warnings, which means handling identity, and how we scope declarations to particular situations in the depgraph. Keeping the declarations inside metadata files means we don't have to worry about any of that. But more on that in a sec.

What's really important here is this point, though maybe not the way you intended it:

perhaps some of the major differences between MVS and SAT become less important.

The suggestion of a meta-tool that does this search - yes, that's a SAT search - as a solution to the problems folks are identifying is telling. It's pretty much exactly what we'll have to turn dep into, if MVS goes ahead as described. And the first thing to note there is that, if we're so concerned about these incompatibilities that we're talking about a search tool, then what we're actually saying is that MVS becomes just a step in a larger algorithm, and the grokkability benefits go right out the window.

Except it's worse than that, because no amount of meta tooling can get around the baked-in problem of information loss that arises from compacting minimum and current versions together. The big result of that is cascading rollbacks, which means that actually trying to remediate any of the incompatibilities in this list will very likely end up tossing back other parts of the dependency graph not necessarily related to your problem. And developers won't be able to follow an update strategy that isn't harmful to others. (Oh, and phantom rules, but that's just an MVS side effect in general.)

This is why i've asserted that MVS is an unsuitable intermediate layer on which to build a higher-order tool like this - "not fit for purpose." It's clear that folks believe these incompatibilities will occur, so MVS is just taking a hard problem and making it harder.

If instead, we unify the problem of an "incompatibility service" back into a metadata file, then i believe it's possible, using only a simple set of pairwise declarations, to achieve the same effect. (This is a draft of the concept, but it increasingly seems to hang together)

It would entail that parts of MVS change, but MVS could still run atop the information encoded there. That'd mostly be useful if incompatibilities truly go nuts, and you want to just avoid all of them. But the primary algorithm would start from a baseline that looks like MVS, then switch to a broader search (to be clear, MVS itself should still be considered search), without the possibility of moving into absurdly old versions.

(note, i'll be on vacation this week, so won't be responding till next weekend)

@sdboyer

The suggestion of a meta-tool that does this search - yes, that's a SAT search

Can you be more specific? The sentence you quoted is right after Ian suggesting a tool to report whether the selected versions are compatible - and to the best of my knowledge, that is the main alternative suggested here (it certainly is what I intended above). That problem most definitely is not a search and it's trivial and doesn't require solving SAT (it is just evaluating a boolean formula for a given set of values, not trying to find values that satisfy it).

Right, simply reporting that there are some known-incompatible values in the formula does not require solving SAT. Taking any action on that basis, such as a tool that assists in the process of finding a result with no such values in it, does.

I quoted that sentence not because i think it is indicative of people having accepted search as always necessary, but because if we believe that reporting such conditions is important, then it is because we believe it is likely we will encounter such scenarios.

The problem is, once the plausibility and the importance of addressing those cases gets established, it looks like folks then make the erroneous jump that "we can just do all the search things on top of MVS, and it'll be fine." We can, but such attempts become much trickier to deal with because of the useful possible paths that MVS cuts off, by design.


I quoted that sentence not because i think it is indicative of people having accepted search as always necessary, but because if we believe that reporting such conditions is important, then it is because we believe it is likely we will encounter such scenarios.

To be clear: The suggestion of retrofitting upper bounds in this way is purely reactive to concerns brought up and to show that it can be done (to critically question the claim that MVS is fundamentally unfit for purpose). It seems a bit unfair to take that concession and willingness to compromise as proof that we think you were right all along.

To me, that claim (that MVS is unfit and an essentially irreversible step in the wrong direction) is what I am personally challenging, and it is the lens I am reading your arguments through. One of those arguments was that it's a feature if we can declare incompatibilities and have the version selection algorithm fail when they are encountered. Another fair argument is that if they do occur, it would be nice to have the algorithm solve them for us (which would indeed require a SAT solver).

However, while I think those are valid and fair concerns, I don't believe they pass the bar of proving to me that MVS is fundamentally unfit. I still believe MVS as a starting point brings good and important features to the table, and that if those concerns turn out to cause significant pain in practice, there are still lots of ways to iterate: from adding upper bounds (whether as part of go.mod or as a separate service) with pure failures, up to and including adding a full SAT solver and lock files at some point. That is, I agree with you that those things will happen, but (a) I am (maybe naively) optimistic that they won't cause as much pain as you anticipate, and (b) I believe they are solvable problems even if we start off with MVS.

It occurs to me that having something outside of source control determine compatibility would change the determinism of an MVS system. Say you have foo >= 1.5.0 as a constraint in one lib, and another lib has foo >= 1.6.0. Put those two in a binary and MVS chooses 1.6.0. In MVS that is all you need for a repeatable build: it will always choose 1.6.0.

But if you add external compatibility information to the mix, then you could update that external data to say the first library is not compatible with 1.6.0, and the algorithm would choose 1.7.0 even though the code hasn't changed... which means you'd need a lock file again.
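A runnable sketch of that shift in behavior, with made-up version lists and a deliberately simplified rule (take the maximum of the declared minimums, then skip past externally flagged versions; it assumes a newer, unflagged version always exists):

```go
package main

import "fmt"

// Hypothetical published versions of module "foo", oldest to newest.
var available = []string{"v1.5.0", "v1.6.0", "v1.7.0"}

// index returns the position of v in available (v is assumed present).
func index(v string) int {
	for i, a := range available {
		if a == v {
			return i
		}
	}
	return -1
}

// selectVersion picks the maximum of the declared minimum versions, then
// skips past any version flagged by the external incompatibility list.
func selectVersion(minimums []string, flagged map[string]bool) string {
	sel := 0
	for _, m := range minimums {
		if i := index(m); i > sel {
			sel = i
		}
	}
	for flagged[available[sel]] {
		sel++ // assumes a newer, unflagged version exists
	}
	return available[sel]
}

func main() {
	mins := []string{"v1.5.0", "v1.6.0"}
	// From the declared minimums alone, the result is repeatable:
	fmt.Println(selectVersion(mins, nil)) // v1.6.0
	// External data changes the result with no change to any go.mod:
	fmt.Println(selectVersion(mins, map[string]bool{"v1.6.0": true})) // v1.7.0
}
```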

For reference, I don't think a lock file is a bad thing. It's nice to have an explicit list of exactly what you need to build. And that should make it fast. No magic logic needed.

@natefinch If the application's go.mod file was updated to require v1.7.0 because the external compatibility tool indicated v1.6.0 was incompatible, you wouldn't need a lock file. Because the v1.7.0 requirement lives in the go.mod file, the author could also add a comment saying why v1.7.0 is being used, and that information would be useful to readers.
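For instance (hypothetical module path; go.mod permits line comments), the require directive itself can carry that rationale:

```
require example.com/foo v1.7.0 // not v1.6.0: reported incompatible with this app
```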

@leighmcculloch, if any files in the app are updated, then it is a different build, and that is entirely outside the scope of the "reproducible build without a lock file" problem.

Out-of-band compatibility information is proposed to reflect how knowledge develops: no incompatibilities were known at release time, but one later became apparent, so extra information is published regarding already-released versions. IMHO, by definition this approach leads to a change in how dependencies are pulled; otherwise, why have this extra incompatibility information at all?

@redbaron @natefinch

The point of the incompatibility information is for authors of libraries to communicate incompatibility information to their users. Whether that information is used at build or at release time is a different question.

In vgo's case, the current idea is to only display warnings (or potentially croak), but notably not to let the information influence the choice of versions used (as that would require solving SAT). So it actually doesn't matter: you can use it at either time, or both, and it will fulfill its duty just fine while retaining the property of repeatability¹.

In dep, this information is only used at release time and then recorded in a lock file, which is used at build time. So it seems we already consider release-time use "good enough", at least when it comes to concerns of vgo vs. dep.

I still don't think we actually have to answer those questions right now, though.


¹ Personally, I'd argue that using it at release time, and at build time only when -v is given, is better, because a user shouldn't have to decide whether a warning is actionable or not.

@rsc wrote:

To follow up, once we have established versions and vgo, we absolutely want to have a new godoc.org (maybe a different name) that records additional information about packages, information that the go command can consult. And some of that information would be pair-wise incompatibility that the go command could report as warnings or errors in any particular build (that is, reporting the damage, not trying to hide it by working around it).

I'm wondering if it is necessary to record pair-wise incompatibility. The way I see it currently, any incompatibility between module A@vN and module B@vM is really because B made an incompatible change relative to some earlier version vL, where L < M. (For example, if A@v1.2 works with B@v1.4 but breaks with B@v1.5, the root cause is the change B made between v1.4 and v1.5.)

If module B did not make an incompatible change, then module A just has a bug. If it did, then the issue is about B itself, not about the pairing of A and B.

So ISTM that any public repository of module metadata need only record incompatibilities of a module with previous versions of itself, which may make the problem more tractable. These incompatibility reports are quite similar to bug reports, although they're not resolvable: once a version is published, it cannot be changed.

When you upgrade your module versions, the go tool could consult the metadata and refuse to consider a version that's incompatible with any currently chosen version. I think this avoids the need to solve SAT. It could also decide that a given module has too many incompatibility reports and refuse to add it as a dependency.

A set of tuples of the form (module, oldVersion, newVersion, description) might be sufficient.
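A minimal Go sketch of that tuple and the refuse-the-upgrade check (the field names, report shape, and exact-step matching are illustrative assumptions; a real tool would compare version ranges rather than exact strings):

```go
package main

import "fmt"

// Report is a hypothetical self-incompatibility record: Module broke
// something going from OldVersion to NewVersion.
type Report struct {
	Module      string // e.g. "example.com/b"
	OldVersion  string // last version before the breaking change
	NewVersion  string // first version exhibiting the breakage
	Description string // human-readable summary, much like a bug report
}

// upgradeOK reports whether moving module from current to candidate is
// free of known breaks. For brevity it matches only the exact
// (current, candidate) step; a real tool would also check every
// intermediate version.
func upgradeOK(module, current, candidate string, reports []Report) bool {
	for _, r := range reports {
		if r.Module == module && r.OldVersion == current && r.NewVersion == candidate {
			return false
		}
	}
	return true
}

func main() {
	reports := []Report{{
		Module: "example.com/b", OldVersion: "v1.4.0", NewVersion: "v1.5.0",
		Description: "renamed an exported type",
	}}
	fmt.Println(upgradeOK("example.com/b", "v1.4.0", "v1.5.0", reports)) // false: refuse this upgrade
	fmt.Println(upgradeOK("example.com/b", "v1.3.0", "v1.4.0", reports)) // true
}
```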

the go tool could consult the metadata and refuse to consider a version that's incompatible with any currently chosen version

Of course, this doesn't work when you're adding several dependencies that between them end up requiring mutually incompatible versions, because the new versions aren't part of the existing module; but there might be a reasonable heuristic available. It's not crucial AFAICS, because dependencies should be added relatively rarely.

I worry that go release is becoming this discussion's "sufficiently smart compiler". What can users concretely expect from go release in Go 1.11/1.12? I think that makes a difference to which expectations around MVS/SIV are reasonable.

Thanks for the energy so many of you have brought to Go and this proposal in particular.

The first goal of the proposal process is to "[m]ake sure that proposals get a proper, fair, timely, recorded evaluation with a clear answer." This proposal was discussed at length and we published a summary of the discussion. After six weeks and much discussion, the proposal review committee - stepping in as arbiter because I wrote the proposal - accepted the proposal.

A single GitHub issue is a difficult place to have a wide-ranging discussion, because GitHub has no threading for different strands of the conversation and doesn't even display all the comments anymore. The only way a discussion like this works at all is by active curation of the discussion summary. Even the summary had gotten unwieldy by the time the proposal was accepted.

Now that the proposal is accepted, this issue is no longer the right place for discussion, and we're no longer updating the summary. Instead, please file new, targeted issues about problems you are having or concrete suggestions for changes, so that we can have focused discussions about each specific topic. Please prefix these new issues with “x/vgo:”. If you mention #24301 in the text of the new issue, then it will be cross-referenced here for others to find.

One last point is that accepting the proposal means accepting the idea, not the prototype implementation, bugs and all. There are still details to work out and bugs to fix, and we'll continue to do that together.

Thanks again for all your help.

There's more work to be done (see the modules label) but the initial module implementation as proposed in this issue has been committed to the main tree, so I am closing this issue.
