https://github.com/gogo/protobuf is not maintained any more (https://github.com/gogo/protobuf/issues/691, currently).
It is a dependency of ours, and it is incompatible with the new golang/protobuf version, which more and more packages depend on. Hence we either need to replace the gogo/protobuf dependency, or stay on an outdated golang/protobuf version, depending on outdated versions of our direct dependencies and potentially even breaking packages this way.

We need to figure this out:

- Will a new maintainer appear, or is there a different plugin with feature parity?
- Use just vanilla protobuf?
The validation plugin we're using has dropped support for gogo: https://github.com/envoyproxy/protoc-gen-validate/pull/340
I think the best way forward is to follow the ecosystem and migrate away from gogo/protobuf. With more and more of our other dependencies moving away from gogo, I think it will become increasingly difficult to keep using it. Of course it's going to be a lot of work to migrate, so if we do it, we need to come up with a good plan.
@rvolosatovs probably knows more about the custom options that are set in our gogottn generator, but here's what I found for the explicit options in our proto files:

- The `gogoproto.customname`, `gogoproto.stdtime`, `gogoproto.stdduration` and `goproto_enum_prefix` options. Those are relatively easy to remove, since the Go compiler will immediately complain about any resulting issues.
- The `gogoproto.embed` option. Removing it would mean that we can no longer access embedded fields (the Go compiler will help us find those), and that messages no longer satisfy some interfaces (this may be more difficult).
- The `gogoproto.nullable` option. This will be much more work, because we will have to start using the getters, and add nil-checks. Resulting problems may not get caught by the Go compiler. A possible workaround would be to temporarily make those fields private, then rewrite to getters/setters, and finally make the fields public again.
- The `gogoproto.customtype` option and enums that use the `gogoproto.enum_stringer` option. For those we've often changed the way they're marshaled/unmarshaled to JSON. For the custom `bytes` fields such as EUI, DevAddr etc. we could change the type (in the proto messages) to `string` (which is binary compatible). With the enums I'm afraid it's going to break the JSON API, since those are now accepted (by UnmarshalJSON) as both strings and ints.

Maybe this is also a good time to start thinking about our v4 API, because I can imagine we might discover some more (API-breaking) surprises.
I think the best way forward would be to first try out https://github.com/alta/protopatch. Depending on the result:

- use it (in the `api` directory), or
- contribute to `protopatch` if it's a low-effort feature. This really depends on the option, though - if we're talking about `customtype` - that IMO definitely justifies contributing, but maybe something like `stdtime` - not so much.

Looking forward, I don't think we should be directly using vanilla protobuf protos in components internally at runtime (given the provided feature set of protobuf today).
It only makes sense to use protobuf for (de-)serialization, so for storage and on API layer. Internally, however, using plain vanilla generated Go protos makes no sense to me.
So, for example, NS:

1. Read `*ttnpb.EndDevice` (vanilla generated Go type) from the registry, deserialized from stored binary data.
2. Convert `*ttnpb.EndDevice` into `T_device` (NOTE: perhaps that could be just a wrapper, initially or forever).
3. Use `T_device` internally in NS.
4. Convert `T_device` into `*ttnpb.EndDevice` (NOTE: this could be a trivial, very fast task if we're using a wrapper, since we only need to modify changed fields, and that could even be performed on binary data directly).
5. Store `*ttnpb.EndDevice`, serialized into binary data.

Refs also https://github.com/TheThingsNetwork/lorawan-stack/issues/342 (generated populators).
I'm not in favor of a (smaller) alternative to gogo. It feels like pushing the can. Let's keep things as vanilla as possible, especially when we need to decide again what the best way forward is.
I do agree that we can consider using intermediary types in some places, instead of relying everywhere on the generated protos. It's basically separating data transfer objects (DTOs: protos, also for storage) from data access objects (DAOs: how we use them). If that is primarily reading, we can also declare interfaces and see how far we get with that.
That said, I wouldn't go as far as changing the entire NS to using `T_device`, but rather specific structs and/or interfaces as needed.
Let's move this discussion to November
@rvolosatovs what is your objection against moving to vanilla with a custom JSON marshaler?
The huge migration burden and loads of boilerplate if we end up just using vanilla protos directly.
I don't really object to that though, I just think we should try to find a simple non-intrusive alternative first and if that's not possible, then resort to reworking this whole thing.
I'm afraid that any plugin that we start to rely on will end up in an unmaintained state at some point. Generally speaking, I'm in favor of keeping things as close to vanilla as possible. If that means `nil`-checking more often than we like, then so be it. It can also work in our favor to know that things are not set, instead of getting an initialized struct.
I'm afraid that refactoring our entire codebase is going to be a pain no matter how we do it. Our (gogo-generated) proto structs are used everywhere right now (gRPC API, HTTP API, events, errors, internally, Redis DB, ...), so changing to something else (whatever that other thing is) is going to touch pretty much everything in our codebase, and the way it looks now, all at the same time.
The hard requirement is that we don't break compatibility of our v3 API. Even if we decide to use this situation as the moment to start working on a v4 API (at least internally), we will still have to keep supporting that v3 API for existing users.
In the long term, I think we would do ourselves a big favor by decoupling our (versioned, stable-within-major) external APIs from our internal (unversioned, stable-within-minor) API and our (versioned, stable) database documents. We could then write or generate functions to convert between our internal APIs and the others.
But I think there are some steps that we can already take now:
In order to keep our v3 JSON API compatible, I think our first TODO is to work on generating JSON marshalers and unmarshalers that understand how to marshal/unmarshal our custom types. I think doing this is smart anyway because there is no stability promise for Go's implementation of the JSON format for protocol buffers, so we better have control over that ourselves. Doing this could also allow us to consider field masks when marshaling to JSON. In the `grpc-gateway` runtime we can register codecs, so we can just write a codec that calls our own (generated) (un)marshalers instead of {gogo,golang}/protobuf's jsonpb.
I already tried that here: https://github.com/TheThingsNetwork/lorawan-stack/commit/a41f62d98ae7ee719b576e6fcd2009a79cd38f4c
This does make protobuf complain about the types registry, so we may need to remove `golang_proto.RegisterType` from our old protos to make this work. Removing that could potentially break resolving of `google.protobuf.Any`, but we only use those in errors and events, so we can probably find workarounds for those specific cases.
This is for the transition period only, but for the long term solution, we'd want to generate similar converters.
I already tried that with a simple service here: https://github.com/TheThingsNetwork/lorawan-stack/commit/cd7d75c8b42ad15eee1ac594ff6d0f2d5a75eb67, but for more complicated services we'd definitely need those converters.
Note that this only changes the grpc service itself. The grpc-gateway still uses the old gogo stuff on the JSON side, and then calls the internal gRPC server, which then runs the new implementation.
Pushed some initial dependency updates and backwards compatibility work-arounds here: https://github.com/TheThingsNetwork/lorawan-stack/compare/issue/2798-codec
More and more of our dependencies are upgrading to protobuf 1.4 and the V2 API, and the longer we keep this open, the more problems we'll have when trying to upgrade our dependencies.
We should really give this some more priority and make a decision on what we're going to do about all this.
Please plan a call for next week so we can discuss offline.
I think we should go through this pain process and concentrate on solving this in a week or two. And to avoid that we do other things as this is going to cause lots of conflicts otherwise. Having as many hands as possible requires knowing exactly what we are going to do in which cases, dividing tasks as much as possible and keeping eyes on the prize.
Next steps:

- Keep `unconvert`, `gofumpt` and whatever else we're doing on top of `protoc`.
- Switch `protoc-gen-gogottn` to `protoc-gen-gofast` (or whatever is closest to vanilla).
- Make the `(gogoproto.*)` options explicit in our proto files, so that they render the same as now. `gopls` and `rf` can help with this.
- Remove the `(gogoproto.*)` options one by one and update code that uses them. Perhaps tools like `gopls` and `rf` can help with this:
  - Removing `gogoproto.populate` and updating tests (https://github.com/TheThingsNetwork/lorawan-stack/issues/342).
  - Removing `gogoproto.customname` and changing `EUI` -> `Eui` etc.
  - Removing `gogoproto.embed`. We do need to make sure that messages still implement interfaces such as `ValidateContext(context.Context) error` and `ExtractRequestFields(m map[string]interface{})`.
  - Removing `gogoproto.nullable` and making sure we use getters where possible, and do nil checks otherwise.

@rvolosatovs let's try to make those first steps for v3.11.3. When that's done, please re-add the other assignees, and let's discuss again.