Numpy: Request: use semantic versioning

Created on 4 Dec 2017  ·  88 Comments  ·  Source: numpy/numpy

Semantic versioning is a widely used convention in software development, distribution, and deployment. In spite of a long-lasting discussion about its appropriateness (Google knows where to find it), it is today the default. Projects that consciously decide not to use semantic versioning tend to choose release numbering schemes that make this immediately clear, such as using dates instead of versions.

NumPy is one of the rare examples of widely used software that uses a version numbering scheme that looks like semantic versioning but isn't, because breaking changes are regularly introduced with a change only in the minor version number. This practice creates false expectations among software developers, software users, and managers of software distributions.

This is all the more important because NumPy is infrastructure software in the same way as operating systems or compilers. Most people who use NumPy (as developers or software users) get and update NumPy indirectly through software distributions like Anaconda or Debian. Often it is a systems administrator who makes the update decision. Neither the people initiating updates nor the people potentially affected by breaking changes follow the NumPy mailing list, and most of them do not even read the release notes.

I therefore propose that NumPy adopt the semantic versioning conventions for future releases. If there are good reasons for not adopting this convention, NumPy should adopt a release labelling scheme that cannot be mistaken for semantic versioning.

15 - Discussion

Most helpful comment

The current NumPy core team cares more about progress (into a direction that matters for some fields but is largely irrelevant to others) than about stability.

I'm sorry, but this just shows you haven't been following NumPy development at all in the last few years, or have a very particular set of glasses. NumPy is actually a very difficult project to contribute to, because there's a lot of concern for backwards compatibility. That's one of the main reasons we have a hard time attracting new maintainers.

All 88 comments

Could numpy be considered to be using semantic versioning, but with a leading 1?

Note that almost every core scientific Python project does what NumPy does: remove deprecated code after a couple of releases unless that's very disruptive, and only bump the major version number for, well, major things.

Not sure if you're proposing a change to the deprecation policy, or if you think we should be at version 14.0.0 instead of 1.14.0 now.

The latter: NumPy should be roughly at version 14 by now. But I propose to adopt this convention only for future releases.

BTW: NumPy's predecessor, Numeric, did use semantic versioning and got to version 24 over roughly a decade. I don't know why this was changed in the transition to NumPy.

My impression is that the vast majority of Python projects do not use semantic versioning. For example, Python itself does not use semantic versioning. (I'm also not aware of any mainstream operating systems or compilers that use semver -- do you have some in mind?) I agree that semver proponents have done a great job of marketing it, leading many developers into thinking that it's a good idea, but AFAICT it's essentially unworkable in the real world for any project larger than left-pad, and I strongly dispute the idea that the semver folks now "own" the traditional MAJOR.MINOR.MICRO format and everyone else has to switch to something else.

Can you give an example of what you mean by a "release labelling scheme that cannot be mistaken for semantic versioning"? Using names instead of numbers? You cite date-based versioning, but the most common scheme for this that I've seen is the one used by e.g. Twisted and PyOpenSSL, which are currently at 17.9.0 and 17.5.0, respectively. Those look like totally plausible semver versions to me...

And can you elaborate on what benefit this would have to users? In this hypothetical future, every release would have some breaking changes that are irrelevant to the majority of users, just like now. What useful information would we be conveying by bumping the major number every few months? "This probably breaks someone, but probably doesn't break you"? Should we also bump the major version on bugfix releases, given the historical inevitability that a large proportion of them will break at least 1 person's code? Can you give any examples of "software developers, software users, and managers of software distributions" who have actually been confused?

Note that the mailing list is a more appropriate venue for this discussion, and probably we would have to have a discussion there before actually making any change, but the comments here should be useful in getting a sense of what kind of issues you'd want to address in that discussion.

@njsmith It seems that the only factual point we disagree on is whether or not semantic versioning is the default assumption today. This requires a clearer definition of the community in which it is (or not) the default. The levels of software management I care about are distribution management and systems administration, which is where people decide which version is most appropriate in their context.

The informal inquiry that led me to the conclusion that semantic versioning is the default consisted of talking to administrators of scientific computing installations. I also envisaged a more empirical approach (listing the packages on a recent Debian installation and picking a few of them randomly to investigate their versioning approach), but this turned out to be very difficult, because few projects clearly state if they use semantic versioning or not.

A comment from one systems administrator particularly struck me as relevant: he said that for the purposes of deciding which version to install, any convention other than semantic versioning is useless. Systems administrators can neither explore each package in detail (they lack the time and the competence) nor consult all their users (too many of them). They have to adopt a uniform policy, and this tends to be based on the assumption of semantic versioning. For example, an administrator of a computing cluster told me that he checks with a few "power users" he knows personally before applying an update with a change in the major version number.
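
To make that policy concrete, here is a minimal sketch (assuming the packaging library is available; the function name is only for illustration) of the rule that administrator described:

    from packaging.version import Version

    def needs_review(installed, candidate):
        # Flag an upgrade for manual review only when the first number changes;
        # everything else is applied as a routine update.
        return Version(candidate).release[0] > Version(installed).release[0]

    print(needs_review("1.13.3", "1.14.0"))  # False: routine update
    print(needs_review("1.13.3", "2.0.0"))   # True: check with a few power users first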

As for examples of people who have actually been confused, specifically concerning scientific Python users, I have plenty of them: colleagues at work, people I meet at conferences, people who ask for advice by e-mail, students in my classes. This typically starts with "I know you are a Python expert, can you help me with a problem?" That problem turns out to be a script that works on one computer but not on another. Most of these people don't consider dependency issues at all, but a few did actually compare the version numbers of the two installations, finding only "small differences".

As @eric-wieser and @rgommers noted, my request is almost synonymous with requesting that the initial "1." be dropped from NumPy versions. In other words, NumPy de facto already uses semantic versioning, even though it is not the result of a policy decision and therefore probably not done rigorously. However, it does suggest that NumPy could adopt semantic versioning with almost no change to the current development workflow.

A comment from one systems administrator particularly struck me as relevant: he said that for the purposes of deciding which version to install, any convention other than semantic versioning is useless.

Unfortunately, semantic versioning is also useless for this :-(. I don't mean to split hairs or exaggerate; I totally get that it's a real problem. But just because a problem is real doesn't mean that it has a solution. You fundamentally cannot boil down the question "should I upgrade this software?" to a simple mechanical check. It's a fantasy. Projects that use semver regularly make major releases that all their users ought to immediately upgrade to, and regularly make breaking changes in minor releases.

Systems administrators can neither explore each package in detail (they lack the time and the competence) nor consult all their users (too many of them). They have to adopt a uniform policy, and this tends to be based on the assumption of semantic versioning. For example, an administrator of a computing cluster told me that he checks with a few "power users" he knows personally before applying an update with a change in the major version number.

I like this part though :-). I doubt we'll agree about the philosophy of semver, but it's much easier to have a discussion about the concrete effects of different versioning schemes, and which outcome we find most desirable.

I don't think the concept of semver has much to do with this policy -- does the system admin you talked to actually check every project to see if they're using semver? Most projects don't, as you said, it's hard to even tell which ones do. And the policy is the same one that sysadmins have been using since long before semver even existed. I think a better characterization of this policy would be: "follow the project's recommendation about how careful to be with an upgrade", along with the ancient tradition that major releases are "big" and minor releases are "little".

The NumPy project's recommendation is that system administrators should upgrade to new feature releases, so what I take from this anecdote is that our current numbering scheme is accurately communicating what we want it to, and that switching to semver would not...

@njsmith OK, let's turn away from philosophy and towards practicalities: What is the role of software version numbers in the communication between software developers, system maintainers, and software users?

Again it seems that we have a major difference of opinion here. For you, it's the developers who give instructions to system maintainers and users, and use the version numbers to convey their instructions. For me, every player should decide according to his/her criteria, and the version number should act as a means of factual communication at the coarsest level.

Given that NumPy has no security implications, I don't see how and why the NumPy project should give universal recommendations. People and institutions have different needs. That's why we have both ArchLinux and CentOS, with very different updating policies.

@khinsen The oldnumeric package still works perfectly, and people can install it with:

pip install oldnumeric

Perhaps this could be your proposed "stable numpy," where the interface to numpy is restricted to Python/Cython and nothing is ever changed. Of course, writing code with oldnumeric is very arcane, but you can't have it both ways.

@xoviat True, but that's a different issue. My point here is not software preservation, but communication between the different players in software management.

Question: As a systems administrator (even just on your personal machine), would you expect a package to drop a complete API layer from version 1.8 to version 1.9?

For those who replied "yes", second question: can you name any software other than numpy that ever did this?

BTW, I can assure you that many people were bitten by this, because I got a lot of mails asking me why MMTK stopped working from one day to the next. All these people had done routine updates of their software installations, without expecting any serious consequences.

But dropping oldnumeric was not the worst event in recent NumPy history. That honor goes to changing the copy/view semantics of some operations such as diagonal. Code that returns different results depending on the NumPy version (minor version number change!) is a real nightmare.
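
For anyone who wants to check what a given installation actually does, a small probe along these lines shows whether diagonal() hands back a copy or a view (np.may_share_memory has been around for a long time):

    import numpy as np

    a = np.arange(9.0).reshape(3, 3)
    d = a.diagonal()
    # On old releases d was an independent copy; since 1.9 it is a read-only
    # view, so d[0] = 0.0 raises an error instead of silently diverging.
    print(np.__version__, np.may_share_memory(a, d))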

BTW, since hardly anyone knows the story: pip install oldnumeric works since two days ago, because @xoviat prepared this add-on package and put it on PyPI. Thanks a lot!

would you expect a package to drop a complete API layer from version 1.8 to version 1.9?

Which layer are you referring to?

can you name any software other than numpy that ever did this?

SciPy dropped weave and maxentropy packages, pandas breaks major features regularly. I'm sure there are many more prominent examples. EDIT: Python itself for example, see https://docs.python.org/3/whatsnew/3.6.html#removed

BTW, I can assure you that many people were bitten by this, because I got a lot of mails asking me why MMTK stopped working from one day to the next. All these people had done routine updates of their software installations, without expecting any serious consequences.

That change was about 10 years in the making, and there is no way that a different versioning scheme would have made a difference here.

Dropping deprecated features is a tradeoff between breaking a small fraction of (older) code, and keeping the codebase easy to maintain. Overall, if we're erring, we're likely erring on the conservative side. As someone who also has had to deal with many-years-old large corporate code bases that use numpy, I feel your pain, but you're arguing for something that is absolutely not a solution (and in general there is no full solution; educating users about things like pinning versions and checking for deprecation warnings is the best we can do).
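
The "checking for deprecation warnings" part can be automated in a test suite; a minimal sketch (the test and the my_analysis function are placeholders, not real project code):

    import warnings
    import numpy as np

    def my_analysis(x):
        return np.sqrt(x)          # stand-in for real project code

    def test_no_deprecated_numpy_usage():
        with warnings.catch_warnings():
            # Turn NumPy's advance warnings into hard failures so breakage
            # is caught before an upgrade, not after.
            warnings.simplefilter("error", DeprecationWarning)
            warnings.simplefilter("error", FutureWarning)
            assert my_analysis(np.arange(4.0)).shape == (4,)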

Which layer are you referring to?

numeric/numarray support I assume

@rgommers Sorry, I should have said "another example outside the SciPy ecosystem".

Also, I am not complaining about dropping the support for oldnumeric. I am complaining about doing this without a change in the major version number.

What difference would that have made? It would have made people hesitate to update without reading the release notes. Everyone using (but not developing) Python code would have taken this as a sign to be careful.

Don't forget that the SciPy ecosystem has an enormous number of low-profile users who are not actively following developments. Python and NumPy are infrastructure items of the same nature as ls and gcc for them. And often it's less than that: they use some software that happens to be written in Python and just happens to depend on NumPy, and when it breaks they are completely lost.

@rgommers Sorry, I should have said "another example outside the SciPy ecosystem".

Just edited my reply with a link to the Python release notes, that's outside the SciPy ecosystem.

What difference would that have made? It would have made people hesitate to update without reading the release notes. Everyone using (but not developing) Python code would have taken this as a sign to be careful.

This will simply not be the case. If instead of 1.12, 1.13, 1.14, etc we have 12.0, 13.0, 14.0 then users get used to that and will use the same upgrade strategy as before. The vast majority will not all of a sudden become much more conservative.

Don't forget that the SciPy ecosystem has an enormous number of low-profile users who are not actively following developments. Python and NumPy are infrastructure items of the same nature as ls and gcc for them. And often it's less than that: they use some software that happens to be written in Python and just happens to depend on NumPy, and when it breaks they are completely lost.

All true, and all not magically fixable by a version number. If they ran pip install --upgrade numpy, they have to know what they're doing (and that doesn't even show a version number anyway). If it came through their packaging system, then the real problem is that the software that broke doesn't have a decent test suite (or it wasn't run).

Other downsides of changing the versioning scheme now:

  • we would be making a change in versioning without a change in maintenance policy, which would be confusing rather than helpful
  • we're now basically following Python's lead and doing the same as the rest of the whole ecosystem; that is a good thing
  • maybe most importantly: we would lose the ability to signal actually major changes, the kind we would go to 2.x for, like a release that breaks the ABI

My baseline reference is not Python, but a typical software installation. As I said, for many (perhaps most) users, NumPy is infrastructure like gnu-coreutils or gcc. They do not interpret version numbers specifically in the context of the SciPy ecosystem.

I did a quick check on a Debian 9 system with about 300 installed packages. 85% of them have a version number starting with an integer followed by a dot. The most common integer prefixes are 1 (30%), 2 (26%), 0 (14%) and 3 (13%). If NumPy adopted a version numbering scheme conforming to common expectations (i.e. semantic versioning or a close approximation), it definitely would stand out and be treated with caution.
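
For the curious, the quick check amounts to something like this (assuming a Debian system with dpkg-query available; Debian epochs such as "1:2.30-3" are stripped first):

    import collections
    import subprocess

    out = subprocess.run(["dpkg-query", "-W", "--showformat=${Version}\n"],
                         capture_output=True, text=True).stdout
    counts = collections.Counter()
    for v in out.splitlines():
        v = v.split(":", 1)[-1]              # drop the Debian epoch if present
        head = v.split(".", 1)[0]
        if head.isdigit():
            counts[head] += 1
    print(counts.most_common(5))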

Note also that the only updates in Debian-installed software that ever broke things for me were in the SciPy ecosystem, with the sole exception of an Emacs update that brought changes in org-mode which broke a home-made org-mode extension. The overall low version number prefixes thus do seem to indicate that most widely used software is much more stable than NumPy and friends.

Uniformity across the SciPy ecosystem is indeed important, but I would prefer that the whole ecosystem adopt a versioning scheme conforming to the outside world's expectations. I am merely starting with NumPy because I see it as the most basic part. It's even more infrastructure than anything else.

Finally, I consider a change in a function's semantics a much more important change than a change in the ABI. The former can cause debugging nightmares for hundreds of users, and make programs produce undetected wrong results for years. The latter leads to error messages that clearly indicate the need to fix something.

According to those standards, NumPy is not even following Python's lead, because the only changes in semantics I am aware of in the Python language happened from 2 to 3.

Finally, I consider a change in a function's semantics a much more important change than a change in the ABI. The former can cause debugging nightmares for hundreds of users, and make programs produce undetected wrong results for years. The latter leads to error messages that clearly indicate the need to fix something.

This we try really hard not to do. Clear breakage when some feature is removed can happen, silently changing numerical results should not. That's one thing we learned from the diagonal view change - that was a mistake in hindsight.

it definitely would stand out and be treated with caution.

I still disagree. Even on Debian, which is definitely not "a typical software installation" for our user base (that'd be something like Anaconda on Windows). You also seem to ignore my argument above that a user doesn't even get to see a version number normally (neither with pip install --upgrade nor with a package manager).

Also, your experience that everything else never breaks is likely because you're using things like OS utilities and GUI programs, not other large dependency chains. E.g. the whole JavaScript/NodeJS ecosystem is probably more fragile than the Python one.

BTW, I can assure you that many people were bitten by this, because I got a lot of mails asking me why MMTK stopped working from one day to the next

This is a good example of the subtleties here. As far as I know, MMTK and your other projects are the only ones still extant that were affected by the removal of the numeric/numarray compatibility code. How many users would you estimate you have? 100? 1000? NumPy has millions, so maybe 0.1% of our users were affected by this removal? This is definitely not zero, and the fact that it's small doesn't mean that it doesn't matter – I wish we could support 100% of users forever in all ways. And I understand that it's particularly painful for you, receiving 100% of the complaints from your users.

But if we bump our major version number for this, it means to 99.9% of our users, we've just cried wolf. It's a false positive. OTOH for that 0.1% of users, it was really important. Yet it's not uncommon that we break more than 0.1% of users in micro releases, despite our best efforts. So what do we do?

It's simply not possible to communicate these nuances through the blunt instrument of a version number. Everyone wants a quick way to tell whether an upgrade will break their code, for good reasons. Semver is popular because it promises to do that. It's popular for the same reason that it's popular to think that fad diets can cure cancer. I wish semver lived up to its promises too. But it doesn't, and if we want to be good engineers we need to deal with the complexities of that reality.

I don't see how and why the NumPy project should give universal recommendations. People and institutions have different needs.

We give universal recommendations because we only have 1 version number, so by definition whatever we do with it is a universal recommendation. That's not something we have any control over.

That honor goes to changing the copy/view semantics of some operations such as diagonal.

IIRC we have literally not received a single complaint about this from someone saying that it broke their code. (Maybe one person?) I'm not saying that means no-one was affected, obviously the people who complain about a change are in general only a small fraction of those affected, but if you use complaints as a rough proxy for real-world impact then I don't think this makes the top 50.

And BTW I'm pretty sure if you go searching through deep history you can find far more egregious changes than that :-).

Note also that the only updates in Debian-installed software that ever broke things for me were in the SciPy ecosystem, with the sole exception of an Emacs update that brought changes in org-mode which broke a home-made org-mode extension.

Respectfully, I think this says more about how you use NumPy vs Debian than it does about NumPy versus Debian. I love Debian, I've used it for almost 20 years now, and I can't count how many times it's broken things. Just in the last week, some bizarre issue with the new gnome broke my login scripts and some other upgrade broke my trackpoint. (Both are fixed now, but still.) I'll also note that Debian's emacs was set up to download and run code over unencrypted/insecure channels for years, because of backwards compatibility concerns about enabling security checks. I don't think there's such a thing as a gcc release that doesn't break a few people, if only because people do things like use -Werror and then minor changes in the warning behavior (which can rely on subtle interactions between optimization passes etc.) become breaking changes.

The overall low version number prefixes thus do seem to indicate that most widely used software is much more stable than NumPy and friends.

The overall low version number prefixes are because most widely used software does not use semver.

Finally, I consider a change in a function's semantics a much more important change than a change in the ABI. The former can cause debugging nightmares for hundreds of users, and make programs produce undetected wrong results for years. The latter leads to error messages that clearly indicate the need to fix something.

Yes, that's why we're extremely wary of such changes.

There is some disconnect in perspectives here: you seem to think that we change things willy-nilly all the time, don't care about backwards compatibility, etc. I can respect that; I understand it reflects your experience. But our experience is that we put extreme care into such changes, and I would say that when I talk to users, it's ~5% who have your perspective, and ~95% who feel that numpy is either doing a good job at stability, or that it's doing too good a job and should be more willing to break things. Perhaps you can take comfort in knowing that even if we disappoint you, we are also disappointing that last group :-).

with the sole exception of an Emacs update

Well, to go off topic, that does serve as an example of the other side of stability. Emacs was static for years due to Stallman's resistance to change, and that resulted in the xEmacs fork. My own path went Emacs -> xEmacs, to heck with it, -> Vim ;) Premature fossilization is also why I stopped using Debian back in the day. For some things, change simply isn't needed or even wanted, and I expect there are people running ancient versions of BSD on old hardware hidden away in a closet. But I don't expect there are many such places.

Apropos the current problem, I don't think a change in the versioning scheme would really make any difference. A more productive path might be to address the modernization problem. @khinsen Do you see your way to accepting updating of your main projects? If so, I think we should explore ways in which we can help you do it.

I am attempting to update the projects at https://github.com/ScientificPython. It requires updating Python code that used the old C API (and I mean old; some functions such as Py_PROTO were from 2000). PRs are of course welcome, but I'm not sure whether anyone wants to spend their time on that.

The bigger issue that I think he brought up is that there are "many projects" (I don't know where exactly they are, because all the projects that I've seen support Python 3) that also need updating; how is it determined which projects are allocated NumPy developer time? And I also don't think his central claim was invalid: SciPy greatly benefits from the fact that it could simply copy and paste old Fortran projects (such as fftpack) with little or no modification. If these had been written in, say, "Fortran 2" and new compilers only compiled "Fortran 3", there would have been significant issues.

That said, these issues aren't really NumPy's fault. Despite what he has said, with NumPy 1.13 installed, oldnumeric still passed all of the tests, indicating that NumPy is not the culprit here. Since the oldnumeric API is literally over a decade old (maybe approaching two decades!), and it still works on the latest NumPy, I think that the NumPy API is probably stable enough.

@charris I fully agree with you that "never change anything" is not a productive attitude in computing.

My point is that the SciPy ecosystem has become so immensely popular that no single approach to managing change can suit everyone. It depends on how quickly methods and their implementations evolve in a given field, on the technical competences of practitioners, on other software they depend on, on the resources they can invest into code, etc.

The current NumPy core team cares more about progress (into a direction that matters for some fields but is largely irrelevant to others) than about stability. That is fine - in the Open Source world, the people who do the work decide what they want to work on. However, my impression is that they do not realize that lots of people whose work depend on NumPy have different needs, feel abandoned by the development team, and are starting to move away from SciPy towards more traditional and stable technology such as C and Fortran (and, in one case I know, even to Matlab).

I have no idea what percentage of NumPy users are sufficiently unhappy with the current state of affairs, and I don't think anyone else has. Once a software package becomes infrastructure, you cannot easily estimate who depends on it. Many who do are not even aware of it, and much code that depends on NumPy (directly or indirectly) is not public and/or not easily discoverable.

If we want to keep everyone happy in the SciPy community, we need to find a way to deal with diverse needs. The very first step, in my opinion, is to shift the control over the rate of change in a specific installation from the developers to someone who is closer to the end user. That could be the end users themselves, or systems administrators, or packagers, or whoever else - again I don't think there is a universal answer to this question. What this requires from the developers is information at the right level, and that is why I started this thread. Of course version numbers cannot save the world, but I see them as a first step to establishing a distributed responsibility for change management.

Finally, some of you seem to believe that I am fighting a personal battle about my own code. It may surprise you that my personal attitude is not the one I am defending here. My own sweet spot for rate of change is somewhere in between what is common in my field and what seems to be prevalent in the NumPy team. Most of my work today uses Python 3 and NumPy > 1.10. MMTK is 20 years old and I do many things differently today. Quite often I take pieces of code from MMTK I need for a specific project and adapt them to "modern SciPy", but that's something I can do with confidence only because I wrote the original code.

I have been maintaining a stable MMTK as a service to the community, not for my own use, which explains why I have been doing maintenance in a minimalistic way, avoiding large-scale changes in the codebase. Both funding for software and domain-competent developers are very hard to find, so MMTK has always remained a one-maintainer-plus-occasional-contributors project. I am not even sure that porting all of MMTK to "modern SciPy" would do anyone any good, because much of the code that depends on MMTK is completely unmaintained. But then, that's true for most of the Python code I see around me, even code completely unrelated to MMTK. It's the reality of a domain of research where experiments rather than computation and coding are in the focus of attention.

@xoviat The number of tests in oldnumeric is ridiculously small. I wouldn't conclude much from the fact that they pass with NumPy 1.13.

The C extension modules that you have been looking at are literally 20 years old and were written for Python 1.4. Back then, they were among the most sophisticated examples of Python-C combos and in fact shaped the early development of Numeric (pre-NumPy) and even CPython itself: CObjects (pre-Capsules) were introduced based on the needs of ScientificPython and MMTK.

I am the first to say that today's APIs and support tools are much better, and I expect they will still improve in the future. But some people simply want to use software for doing research, no matter how old-fashioned it is, and I think they have a right to exist as well.

@rgommers I am not ignoring your argument that a user doesn't even get to see a version number. It's simply not true for the environments I see people use all around me. The people who decide about updates (which are not always end users) do see it. They don't just do "pip install --upgrade" once a week. They would even consider this a careless attitude.

If people around you mainly use Anaconda under Windows, that just confirms that we work in very different environments. In the age of diversity, I hope we can agree that each community may adopt the tools and conventions that work well for it.

And yes, NodeJS is worse, I agree. Fortunately, I can easily ignore it.

Just got an e-mail from a colleague who follows this thread but wouldn't dare to chime in. With an excellent analogy:

"I love it when I get the chance to buy a new microscope and do better science with it. But I would hate to see someone replacing my microscope overnight without consulting with me."

It's all about having control over one's tools.

I promise I will never break into your colleague's lab in the middle of the night and upgrade their numpy. I don't even own a balaclava.

The people who decide about updates (which are not always end users) do see it. They don't just do "pip install --upgrade" once a week. They would even consider this a careless attitude.

If they're sysadmins and understand the pros and cons of various install methods, then they really should also understand (or be taught) how versioning in the Python world (and many many other software projects that are also not using strict semver) works.

The current NumPy core team cares more about progress (into a direction that matters for some fields but is largely irrelevant to others) than about stability.

I'm sorry, but this just shows you haven't been following NumPy development at all in the last few years, or have a very particular set of glasses. NumPy is actually a very difficult project to contribute to, because there's a lot of concern for backwards compatibility. That's one of the main reasons we have a hard time attracting new maintainers.

and, in one case I know, even to Matlab

Matlab was notorious for breaking compatibility. The first thing cooperative projects using Matlab did was specify the version everyone was required to use, same with Microsoft Word if it was being used for documentation. I know folks who switched to NumPy precisely for the improved compatibility. Matlab has its virtues, but compatibility isn't/wasn't one of them. Perhaps things have changed?

However, I think there are a couple of things we can do going forward that might help with compatibility. The first ties into the current discussion of NEPs. Now that NumPy is more mature it might be a good idea to make more use of NEPs when changes that affect compatibility are being proposed, especially if the NEPs are more public and searchable. Second, we could attempt to put up wheels for older NumPy versions on PyPI if that is not too much work. The use of virtual environments seems to be the best current idea for obtaining reproducibility, and having a wider selection of wheels for download might help with that.

Second, we could attempt to put up wheels for older NumPy versions on PyPI if that is not too much work.

Thanks to @matthew-brett's efforts, it looks like the current status is that we have Windows wheels back to 1.10, MacOS back to 1.5 (except 1.7 is missing), and Linux back to 1.6.

The MacOS/Linux situation seems pretty reasonable. I guess we could backfill more Windows releases? OTOH pip install on Windows never used to work without heroic efforts, so I'm not sure there's a large audience for this. We already have 2 years worth, and that will grow over time.

Also, the last time we uploaded old wheels, someone got angry at us because their workflow assumed that there were no wheels and it broke backwards compatibility for them :-)

Would love to hear that story.

Not that I really know this stuff well, but I guess we can try to improve things (I am not quite sure what that means though!). The fact is, we need to move forward slowly, and except possibly for some errors, all releases were meant to break very few people's code. I think our minor release means "almost all people should update and be able to update without noticing anything", and I frankly believe that is true. (Obviously there are things that affected many people, such as integer deprecations, but they do not create wrong results and were long in the making.)

I can see that there might have been some changes big enough to warrant incrementing the major version, though frankly I am not sure which one that would be. And yes, maybe there is some historical loss of momentum when it comes to major releases.

In any case I also can't say I am a fan of saying (almost) every release is a major release. I get that we might have pissed off people with some changes, but I have taken part in some rather extensive changes, and every time, after explaining the reasons, I have only heard that it was the right thing to do, and also we have waited for years until these changes took effect.

As for gcc, etc.: for example, I do not know a lot about compiling/C++, but I have been annoyed by gcc 4.8 or so starting to force me to figure out how to change flags correctly because of some C++11 features being expected, which causes very similar reactions to the emails you seem to be getting about numpy, so I am not sure it's much different :).

Anyway, I don't want to discuss too much here. I appreciate the feedback on whether we might be too fast or not careful enough, but I have to admit that I also do not quite see that changing the major version will help much with that. Personally I think that 1.13 and 1.6 are at least one major version apart in some sense, but there is no single release in between that I can point to and say: that was a major compatibility break for many users.
I remember reading comments in the code: "Let's do this in Numpy 2 maybe", exactly because of fear of any breaking at all. With that approach I fear numpy would have stalled a lot, and frankly I am a bit proud of being part of what, to me at least, seemed like a more active phase again, which was needed and which would have been hard if we were even more conservative. (I am probably biased since I have not much clue what happened before I came :)).

Sorry, maybe this does not make sense (we just had a Christmas party). The semver proposal makes sense, but I have to admit it does not feel like a real solution. I can agree with trying to increment the major version more aggressively, but I also disagree with calling basically every release a major release. I can also agree with trying to be more conservative in some cases, but I am frankly not quite sure what those are (just count the number of PRs hanging because nobody is sure if they might break compatibility somewhere ;), I am sure it is a good portion).

And even if I have read complaints, with a bit of bugging and insistence it is not like we would refuse to undo a change if it is reasonably possible, or at least to delay it for a year or more. I would say it is a part of why we chose a conservative governance model….

So after babbling for a so long:

  • "Almost no ones code is broken" seems maybe OK for a minor version?
  • We have to move forward slowly?
  • Things will break eventually, and maybe we could increment major version sometimes. Possibly even try to do some FutureWarning type changing more in major versions. (I frankly do not believe it will help, but I am willing to try)

Most importantly: I know the frustration, but is there a real solution for us? Do you think if we increment the major version aggressively you will get less emails?

@xoviat are you asking for the story about getting complaints for uploading wheels for old versions? That was #7570.

My favorite definition of semver (I can no longer find a link to the original posting):

  • major: we broke user code on purpose
  • minor: we broke user code by accident
  • patch: we broke user work-arounds to the last minor release's bugs

which is a bit cheeky, but I think hits on an important thing: _any_ change in behavior is going to break some user somewhere. This is particularly true with projects that have large API surfaces.

The very first step, in my opinion, is to shift the control over the rate of change in a specific installation from the developers to someone who is closer to the end user.

I think something that is missing from this conversation is that old versions of libraries are always available (on PyPI, at least as source), so if you have code that requires an old version of python / numpy / matplotlib then use the older versions. User-space environments make this not-too-awful to manage.
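
Concretely, something along these lines is usually enough (the environment path and the pinned versions are only examples):

    python3 -m venv ~/envs/legacy-analysis
    ~/envs/legacy-analysis/bin/pip install "numpy==1.8.2" "scipy==0.14.1"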

If you want to use new versions of the libraries then you have to pay the cost of keeping up with the changes. To push on the microscope analogy, you have to do routine maintenance on your microscope or it degrades over time; software is no different.

@njsmith That's pretty funny. IMO #7570 is not a valid issue given that NumPy complied with the manylinux wheel specification. Generally, wheels should be uploaded for older versions, assuming that time is free. Given that time is not free, we could simply note that people can build wheels for a specific version of NumPy if they want them, by submitting a PR to the numpy-wheels repository. Or not.

@xoviat I mean, if their system broke, it broke; being able to point to a specification doesn't really change that. In general in these discussions I think we should care more about actual effects on end-users than theoretical purity. But OTOH in this case I think it was the right call to keep the wheels up, given that they'd already worked around the problem, the wheels were already uploaded, and as far as we can tell there were a lot more people benefiting than having problems. But it's an interesting reminder of how subtle "backwards incompatibility" can be, and how difficult it is to make generic rules.

@njsmith Your role in the microscope analogy is that of a microscope salesman who approaches my colleague's lab director with an offer for replacing all the lab's microscopes with his latest model, hiding the sentence "incompatible with samples more than 1mm thick" in the fine print of the contract. This makes it very difficult for the lab director to understand that there is a technical point that needs to be discussed with the people who understand those details.

@rgommers I do understand that maintaining NumPy is a chore, and in fact I would handle changes differently mostly for that reason if I were in charge. I'd put the current code in minimal maintenance mode and start on a major redesign in a different namespace, letting old and new coexist indefinitely with interoperability via the buffer interface. And yes, I know this is not a trivial endeavour for lots of reasons, but I think it's the only way to get out of the pile of accumulated technical debt. The main goal of the redesign would be maintainability.

On the other hand, I certainly admit having a very particular set of glasses, but that's true for everyone else in this discussion. My glasses are those of "traditional scientific computing", which is my work environment. The baseline (default expectation) is that updates never break anything intentionally. That's the policy of standardized languages (C, Fortran) but also of the infrastructure libraries (BLAS, LAPACK, MPI, ...). Innovation happens nevertheless (e.g. ATLAS). If you think that's conservative, let me describe what conservative means in my world: never install a version of anything that isn't at least two years old, to make sure that most bugs are known. That's a common policy for supercomputers whose time is very expensive, and whose results can hardly be checked for that reason.

Note that I am not saying that NumPy should adopt my world's default expectations. I am merely saying it should clearly signal that it's different.

@seberg "Almost no ones code is broken" seems fine in theory for a minor version. But once a piece of software acquires infrastructure status (and in my opinion, NumPy is at that level), it is impossible to estimate how many developers and users could be affected. The criterion then invariably becomes "almost no one I can think of is affected", and that's not a good one.

@tacaswell I think the difference between "on purpose" and "by accident" matters a lot in practice. It affects everyone's attitude to a change. Just look at other aspects of life. If the distinction didn't matter, we could drop the word "careful" from the English language.

Well, frankly I think the "major redevelopment" idea is pretty much abandoned right now, since it would require much more development power and might then create a whole different type of mess as well (see py2 and py3 for the first few years).

Agreed, numpy is infrastructure, but I am not a fan of acting as if breaking more people's code is OK/intentional just by bumping major versions faster. It feels more like giving up on the task of trying our very best to do "almost no one's code gets broken" releases (maybe with the "unless you have not watched warnings for a couple of releases" exception) than actually helping with the decision of updating.

So, sometimes we likely should acknowledge that it might not be true, but otherwise I would much prefer to get solutions to make sure that we do not break more. Of course you offered a solution (being very very conservative and starting numpy 2 with major overhaul), but once we admit that this solution is simply not feasible without major funding or so, what else can we do?

Or let me be clear: if you know someone capable of following numpy dev who can keep an eye out for being more conservative when necessary, you know I would at least appreciate it. But I personally do not appreciate giving up our progress of at least slowly getting rid of some of the darker corners in numpy to allow future development. At best, we would end up with a dead NumPy and a replacement in a few years; at worst we will just end up being outpaced, with downstream being frustrated with not being able to move forward, and maybe "replacements" sprouting up which, by nature of not being nearly as mature, just make things worse.

I have to agree with @seberg about creating a different namespace. The entire assumption behind this idea is that the NumPy project has unlimited talent and resources to maintain a library that works for everyone. However, that isn't the case. Developers aren't perfect, so they usually get it wrong the first time. The people writing the original Numeric code couldn't predict all of the scenarios that it would be used in, and they couldn't predict the rise of alternative implementations, such as PyPy.

I think that API stability is highly important. I also think that NumPy has generally got it right. The fact of the matter is that NumPy is already difficult to reason about (and it should be, given that every last ounce of performance is important), but creating a different namespace would make it extremely difficult to keep all of the implications of changing code in your head. I think it's highly likely that if NumPy did that, there would be significantly more bugs because of developers not understanding the ramifications of changing code in one interface on the other.

In summary, I completely understand people's frustrations. But as @njsmith said, there are no solutions to the problem that will satisfy every user. There are only solutions that will satisfy most users most of the time. And the reality is that if NumPy pandered to the minority of users (not meant in a derogatory way) that demanded API stability over all else, the NUMFOCUS funding might dry up because it wouldn't be clear what the money was used for, and then where would we be? Probably in a situation where MMTK can no longer depend on NumPy, just like the situation where it could no longer depend on Numeric.

I'd put the current code in minimal maintenance mode and start on a major redesign in a different namespace, letting old and new coexist indefinitely with interoperability via the buffer interface. And yes, I know this is not a trivial endeavour for lots of reasons, but I think it's the only way to get out of the pile of accumulated technical debt. The main goal of the redesign would be maintainability.

I actually do agree with you, but I don't see how it's feasible without a major injection of funding/vision. NumPy constitutes a huge body of work. Maybe @teoliphant and @skrah will pull it off with plures, but it will be an uphill battle.

Given the NumPy we have today, I think we're doing about as well as we can reasonably do.

For those who replied "yes", second question: can you name any software other than numpy that ever did this?

django is another notable piece of software that doesn't use semantic versioning. They use major changes to indicate substantial breaks, but deprecate things in .x changes after a long period of warnings. More-or-less like NumPy.

I actually do agree with you,

@shoyer out of interest, why? How would that not turn into a very painful py3k-like transition to the new code base at some point?

That's the policy of standardized languages (C, Fortran) but also of the infrastructure libraries (BLAS, LAPACK, MPI, ...). Innovation happens nevertheless (e.g. ATLAS).

Innovation at the pace of LAPACK/ATLAS/OpenBLAS is a recipe for NumPy becoming irrelevant a lot quicker than it otherwise would.

Look, it must be clear to you from all the responses that this versioning change is not going to happen, and that's the consensus between the ~7 active core devs plus some devs of major downstream projects. If you need absolute stability, then you're probably best off with just pinning a fixed version on your systems for some years, and educating your sysadmins about that.

How would that not turn into a very painful py3k-like transition to the new code base at some point?

The big difference is that whereas Python 3 is an all/nothing switch (for Python programs), it's easy or at least doable to mix/match different ndarray libraries. The buffer interface means you can transfer data back and forth without copies. If you coerce inputs to your functions with np.asarray() you might not even notice if some library you're working with switches to return arrays of a different type.
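
A tiny illustration of that interoperability, using only the standard library's array module; np.frombuffer wraps an existing buffer rather than copying it:

    import array
    import numpy as np

    buf = array.array("d", [1.0, 2.0, 3.0])   # a non-NumPy array type
    a = np.frombuffer(buf)                     # same underlying memory, no copy
    a[0] = 42.0
    print(buf[0])                              # 42.0: both objects see the change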

This sounds like repeating parts of the numeric/numarray/numpy experience, which also was not very pleasant (the buffer interface will help, but I think such a transition will still involve manual code changes, not all of which are trivial). It will also not be possible for libraries such as Scipy to "upgrade" to the "new array" without breaking backward compatibility, so the issue just bubbles upward in the ecosystem, forcing other libraries to make a similar decision to abandon old namespaces.

If everyone coerced their inputs with np.asarray, then np.matrix wouldn't be a problem :-).

If different array libraries can agree on duck types and we restrict to dtypes representable by the buffer protocol, then it can work. But if the whole point of a rewrite would be to make incompatible interface changes on the array objects and implement new dtypes, ... I really don't see how to make it work. Concrete example: one obvious thing to fix in this kind of rewrite would be the behavior of ndarray.dot on high-dimensional inputs. But if there's a library out there that does def f(a, b): return a.dot(b), then handing it the new arrays will potentially break it. It doesn't really matter whether that library is called numpy or not.
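
To make that example concrete (np.matmul has existed since 1.10 and shows the behaviour a redesign would presumably prefer):

    import numpy as np

    a = np.ones((2, 3, 4))
    b = np.ones((2, 4, 5))
    print(np.dot(a, b).shape)     # (2, 3, 2, 5): pairs every matrix in a with every one in b
    print(np.matmul(a, b).shape)  # (2, 3, 5): treats the leading axis as a stack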

And that's before even getting into the general impossibility of rewriting everything in one big bang, sustaining developer attention while we're doing that, and not only getting it right but making so much better that it can convince people to migrate -- all without any incremental feedback from users. I think dynd is an instructive example here.

@rgommers Please read again what I wrote: I do not, repeat not, propose that NumPy should adopt the pace of LAPACK. I propose that it signals clearly to people who have such an expectation (i.e. 80% of the people in my environment) that it doesn't.

@njsmith A major aspect of a redesign as I see it would be to abandon OO. It's not a good approach to structuring code for a single data structure with tons of functions that work on it. Write np.dot(a, b) and the problem you describe evaporates instantly. You can have any number of implementations of namespace.dot you like. Each library can use the one it likes, and they can still interoperate. It's OO that creates a single namespace for methods, and that's a problem.

Yes, that's a major break with Python habits. And yes, it will be tricky to implement operators on top of that. But I think it can be done, and I think it's worth the effort.
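
A sketch of what that function-first style looks like (the dot wrapper here is purely hypothetical, not a proposal for a concrete API):

    import array
    import numpy as np

    def dot(a, b):
        # A module-level function: coerce whatever the caller passes through
        # the array/buffer protocols instead of requiring a particular class.
        return np.dot(np.asarray(a), np.asarray(b))

    print(dot(array.array("d", [1.0, 2.0]), [3.0, 4.0]))   # 11.0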

Just to show that I can be in favor of breaking things ;-)

@rgommers Please read again what I wrote: I do not, repeat not, propose that NumPy should adopt the pace of LAPACK.

I understand that. The two paragraphs of my reply were not directly related, sorry if that was confusing.

I propose that it signals clearly to people who have such an expectation (i.e. 80% of the people in my environment) that it doesn't.

That's what I was saying, consensus seems to be that your proposal will be rejected. You're going to have to simply request a pinned version to that 80% and explain why that's what you want.

@khinsen OK, then pretend instead my example was the incompatible changes in indexing that we would surely make if it were possible. (Fancy indexing has some extraordinarily confusing corner cases.)

@njsmith Same problem, in a way. Indexing is a method call in Python, so it's OO again.

Side remark: Fancy indexing is the biggest design mistake in NumPy, in my opinion, because it doesn't even have (and never had) an unambiguous specification. It does np.take for integer arguments and np.repeat for boolean arguments. Since booleans are a subtype of integers in Python, this creates an ambiguity for arguments containing only 0s and 1s.
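
The ambiguity is easy to demonstrate (same values, three different readings):

    import numpy as np

    a = np.array([10, 20, 30, 40])
    idx = [0, 1, 1, 0]

    print(np.take(a, idx))               # [10 20 20 10]: read as positions
    print(np.repeat(a, idx))             # [20 30]: read as repeat counts
    print(a[np.array(idx, dtype=bool)])  # [20 30]: read as a boolean mask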

There actually is a relation to the topic of this thread, because this is exactly the kind of design mistake that happens when development moves on too fast.

I discuss fancy indexing in my SciPy courses exclusively to tell people not to use it. There's np.take and np.repeat which work perfectly well and cause no trouble. And if you use them as functions rather than methods, there's no OO issue either. For those who dislike np.repeat because the name doesn't suggest the intention when used with booleans, just introduce an alias: select = np.repeat. Again something made unnecessarily difficult by OO.

Note also that plain indexing is not subject to any such problem. It does what pretty much everyone would expect it to do under all possible circumstances, so it can be implemented in a method.

The thorny issue from my point of view is arithmetic. You do want to write a+b for array addition, rather than np.add(a, b), but there is no universal agreement on what exactly a+b should do, in particular in terms of the result dtype. That was one of the core issues of the Numeric/numarray split, which led to the introduction of new scalar types in NumPy, and those cause their fair share of unpleasant surprises as well. I believe this problem can be solved, but not in side remarks on a GitHub issue discussion.
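
One concrete example of the disagreement, as it behaves on the 1.x releases contemporary with this thread (so-called value-based promotion; later major releases changed these rules yet again):

    import numpy as np

    a = np.array([1, 2], dtype=np.uint8)
    print((a + 1).dtype)      # uint8: the small Python int is absorbed
    print((a + 1000).dtype)   # int16: the result dtype depends on the value added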

@rgommers If "requesting a pinned version to that 80%" were possible, I'd have done it long ago. "That 80%" is not an organized community you can talk to. It's a large number of people who share a background culture, but do not interact with each other. Your suggestion is a bit like "request Windows users to switch to Linux" (or vice versa).

This is the point I am trying to make by insisting on NumPy being infrastructure software. For many people it's just one out of hundreds of lego bricks that make up their software installation. They don't specifically care about it, it just needs to be there and "work".

I don't want to derail this too much further, but I have no idea what you're referring to with np.repeat and bool arrays

@eric-wieser repeat 0 times and you remove it from the array, 1 time and it stays. I disagree with teaching this instead of indexing, but whatever (the worst strangeness is gone nowadays, so yeah, in numpy a bool is not an int in most cases; accepting that, you are fine now I think, so that is even an incompatibility with lists if you wish to see it as that, but...).

A side point, since this isn't going anywhere anymore anyways :). I actually somewhat hope that slowly fixing stuff in numpy will make it easier to at some point in the future make numpy more replaceable.

Your suggestion is a bit like "request Windows users to switch to Linux" (or vice versa).

Hmm, asking people that are technically competent (I hope ...) to learn how version numbers in the real world actually work is not really anything like a Windows to Linux switch.

This is the point I am trying to make by insisting on NumPy being infrastructure software.

And presumably, if we would make this switch, you'll move on to SciPy because it's the next bit of infrastructure? When does it stop being infrastructure? And why would NumPy and the other bits of infrastructure want to have a completely different versioning scheme from Python itself and the rest of the whole scientific Python ecosystem?

That 80%" is not an organized community you can talk to.

The admins for the supercomputer(s) you use really should talk to each other right? There can't be masses of people running around all updating software on those couple of systems and never talking. I didn't mean you should educate 80% of all sysadmins worldwide, just the ones you need.

@seberg Declaring that lists and arrays are different data types which only share indexing as a common property is a valid point of view to adopt. It would also make the existence of specific NumPy scalars easier to explain. But I haven't seen any presentation of NumPy that takes this point of view.

@rgommers

The admins for the supercomputer(s) you use really should talk to each other right?

No. They don't even know about each others' existence. They work for different organizations in different places, whose only commonality is to have a few users in common.

I didn't mean you should educate 80% of all sysadmins worldwide, just the ones you need.

This isn't about me - I have a solution that works perfectly well for myself: I always install Python plus all libraries from source, in my home directory.

What this is about is people I collaborate with and people who ask me for help (e.g. because they participated in one of my Python courses in the past). They do not have the technical competence to manage their own Python installation and defer to someone else (admin or more experienced colleague).

to learn how version numbers in the real world actually work

In the shared background culture of the people I am thinking about, the real world actually works like semantic versioning, or a close approximation.

In the shared background culture of the people I am thinking about, the real world actually works like semantic versioning, or a close approximation.

That's then because they use a limited number of libraries, mostly slow-moving like LAPACK & co. As @njsmith pointed out, the majority of software has low version numbers because they don't use semantic versioning.

@rgommers They do use mostly slow-moving libraries, though I wouldn't say "a small number".

As @njsmith pointed out, the majority of software has low version numbers because they don't use semantic versioning.

Not in my experience. But then, "majority" probably means "most of those I know", both for you and me, and there is probably little overlap between the packages you use and the packages I use, outside of the SciPy ecosystem.

And presumably, if we would make this switch, you'll move on to SciPy because it's the next bit of infrastructure? When does it stop being infrastructure?

I would indeed prefer that SciPy and all the rest of the fundamentals of the SciPy ecosystem adopt the same principles, but personally I won't invest any effort into arguing for this anywhere else than for NumPy, which is far more widely used than any of the other libraries, and also far more fundamental. NumPy arrays are the central data structure for much scientific software, whereas SciPy is just a huge collection of functions from which any given application uses a small subset.

Note also that SciPy de facto uses semantic versioning, though probably not intentionally, because it only just reached 1.0.

And why would NumPy and the other bits of infrastructure want to have a completely different versioning scheme from Python itself and the rest of the whole scientific Python ecosystem?

The whole SciPy ecosystem should indeed use the same approach, the one which is (as I see it) the dominant one in scientific computing. This doesn't apply to Python and Python libraries of other domains, which have different habits. Web development, for example, is much less conservative than scientific computing. It's also mostly done by different people, who care for different users. The Python language would be the only point of contact.

NumPy arrays are the central data structure for much scientific software, whereas SciPy is just a huge collection of functions from which any given application uses a small subset.

And the central data structure is stable. The large majority of incompatible changes in any given release are corner cases, and mostly not in ndarray behavior. See https://github.com/numpy/numpy/blob/master/doc/release/1.13.0-notes.rst#compatibility-notes for example. Note also that none of those changes would have any meaning for a sysadmin, so even if they stared at those notes for a long time (even if that release had been labelled 2.0.0), they would not be able to decide whether an upgrade was okay or not.

Note also that SciPy de facto uses semantic versioning, though probably not intentionally, because it only just reached 1.0.

SciPy uses the exact same versioning scheme and deprecation/removal policy as NumPy. Being at 0.x for a long time does not imply semver.

the one which is (as I see it) the dominant one in scientific computing

Traditional comparisons of the SciPy ecosystem are with things like Matlab and R. Can't find any info about R, but it's at 3.x and has evolved a lot, so probably not semver. Matlab: definitely not semver.

RE: fancy indexing. Indeed, this could use a dedicated function. This is what was done in TensorFlow, for example, with tf.gather, tf.gather_nd, tf.scatter_nd, tf.boolean_mask, etc. The result is a little more verbose than overloading [], but certainly more transparent.
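As a rough illustration of that style (TensorFlow 2.x; the values are made up, but the functions are real TF APIs):

```python
import tensorflow as tf

x = tf.constant([[1, 2], [3, 4], [5, 6]])

tf.gather(x, [0, 2])                     # rows 0 and 2, like x[[0, 2]] in NumPy
tf.boolean_mask(x, [True, False, True])  # the same rows, selected by a boolean mask
tf.gather_nd(x, [[0, 1], [2, 0]])        # individual elements x[0, 1] and x[2, 0]
tf.scatter_nd([[1], [3]], [9, 9], [5])   # scatter updates into a new length-5 tensor
```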

Another feature that can help is type annotations, which were partially motivated by the difficulty of the Python 2 to 3 transition.

I'm not saying this would be easy. In my mind, the community consequences are a bigger deal. This would indeed take a lot of energy to implement and then push downstream into projects like SciPy.

@khinsen I've been following the discussion all week and I think I have a practical problem to test your take on it. This might be a good case for seeing how your perspective would handle such conflicts, instead of the slightly abstract discussion so far.

Currently, thanks to the Apple Accelerate framework, the minimum required LAPACK version is 3.1-ish, which is from more than a decade ago. Currently LAPACK is at 3.8.0. In the meantime they have discarded quite a number of routines (deprecated and/or removed), fixed a lot of bugs, and, most importantly, introduced new routines that are needed to fill the gap between commercial software and Python scientific software. The end result is summarized here. I have been constantly annoying mainly @rgommers and others about this for the last 6 months 😃, and I can assure you that if they were the kind of people you have, maybe unwillingly, portrayed here, this would have happened by now and broken the code of many people. Instead they have been patiently explaining why it is not that easy to drop the support for Accelerate.

Now there is an undisputed need for newer versions. That is not the discussion, and we can safely skip that part. There is a significant portion of NumPy and SciPy users who would benefit from this. But we can't simply drop it, because of the arguments that you have already presented. How would you resolve this?

I'm not asking this in a snarky fashion, but since all the devs seemingly think alike (and I have to say I agree with them), maybe your outlook can offer a fresh idea. Should we keep Accelerate and create a new NumPy/SciPy package every time such a thing happens? If we drop the support in order to innovate, what do you think is the best way to go here?

Currently, thanks to Apple Accelerate framework the minimum required LAPACK version is 3.1.ish

@mhvk, this might be a problem for #9976 in 1.14, which I think needs 3.2.2 (edit: let's move discussion there)

@xoviat: Let's have this discussion on that issue

@ilayn Thanks for nudging this discussion towards the concrete and constructive! There are in fact many similarities between that situation and the ones that motivated me to start this thread.

The main common point: there are different users/communities that have different needs. Some want Accelerate, others want the new LAPACK features. Both have good reasons for their specific priorities. There may even be people who want both Accelerate and the new LAPACK features, though this isn't clear to me.

In the Fortran/C world, there is no such problem because the software stacks are shallower. There's Fortran, LAPACK, and the application code, without additional intermediates. What happens is that each application code chooses a particular version of LAPACK depending on its priorities. Computing centres typically keep several LAPACK versions in parallel, each in its own directory, the choice being made by modifying the application code's Makefile.

The lesson that we can and should take over into the SciPy ecosystem is that choosing software versions is not the task of software developers, but of the people who assemble application-specific software bundles. In our world, that's the people who work on Anaconda, Debian, and other software distributions, but also systems managers at various levels and end users with the right competence and motivation.

So my proposal for the SciPy/LAPACK dilemma is to keep today's SciPy using Accelerate, but put it into minimal maintenance mode (possibly taken over by different people). People who want Accelerate can then choose "SciPy 2017" and be happy. They won't get the new LAPACK features, but presumably that's fine with most of them. Development continues in a new namespace (scipy2, scipy2018 or whatever else), which switches to modern LAPACK. If technically possible, allow parallel installation of these two (and future) variants (which I think should be possible for SciPy). Otherwise, people needing both will have to use multiple environments (conda, venv, or system-wide environments via Nix or Guix). Note that even in this second scenario, I strongly recommend changing the namespace with each incompatible change, to make sure that readers of Python code at any level understand for which SciPy version the code was written.

The overall idea is that developers propose new stuff (and concentrate on its development), but don't advertise it as "better" in a general sense, nor as a universal replacement. Choosing the right combination of software versions for a particular task is not their job, it's somebody else's.

The general idea that development and assembly are done independently and by different people also suggests that today's mega-packages should be broken up into smaller units that can progress at different rates. There is no reason today for NumPy to contain a small LAPACK interface and tools like f2py. For SciPy, it may make sense to have a common namespace indicating coherence and a common development policy, but the sub-packages could well be distributed independently. The mega-package approach goes back to Python's "batteries included" motto, which was great 20 years ago. Today's user base is too diverse for that, and software packaging has generally been recognized as a distinct activity. Including the batteries should now be Anaconda's job.

The main obstacle to adopting such an approach is traditional Linux distributions such as Debian or Fedora with their "one Python installation per machine" approach. I think they could switch to multiple system-wide virtual environments with reasonable effort, but I haven't thought much about this. For me, the future of software packaging is environment-based systems such as conda or Guix.

I don't see how all the propositions you have put forth so far are compatible with any of these steps:

  • You have just recreated the madness of the following picture
    [screenshot: many parallel copies of the same runtime library installed on a Windows machine]
    I just counted, and I have 27 copies of it on my Windows machine right now. Now multiply that by 10 (since releases are more frequent here) and by 2 (since NumPy and SciPy release cycles are independent). By 2025 I'll easily have 15 copies of each library, plus 10 LAPACKs and 5 f2pys as dependencies. Leaving aside the maintenance burden this puts on the few dozen people maintaining both packages, this simply won't work. (C++ is not relevant here; insert any standard lib of anything.) Ask any commercial code developer for Windows and tell them this is such a good idea. I'm not responsible for what follows in that exchange.
  • Then you increased the granularity of the packages, and now they all do their own thing with different package versions: f2py breaks something in one version, so SciPy stops building in the next release while still depending on the earlier version of NumPy. So some holistic entity is supposed to bring them all together, for free.
  • Then you also made Anaconda (or some other entity) a major corporate dependency, just like Accelerate was. Or there will simply be an abundance of "somebody else"s.
  • Then you pushed most of the user base into a workflow involving virtual envs that they really don't want (myself included).
  • Then you even modified Linux operating systems in passing (which is ... I mean, just read some of their mailing lists, it's fun).

Maybe you digressed a bit.

(This has become a free-for-all discussion, so I'll go ahead and jump in).

The problem with keeping support for accelerate is not that it lacks newer LAPACK APIs. If that were the problem, we could ship newer LAPACK shims and be done. The problem is that there are basic functions that return incorrect results in certain scenarios. There is no way to work around that other than to write our own BLAS functions. And if we're doing that, we might as well require OpenBLAS or MKL.

@xoviat These have been all discussed in https://github.com/scipy/scipy/pull/6051. It's as usual never that simple. But the point is not to discuss Accelerate drop but use it as a use case for the actual dev cycle for new versions.

@ilayn Yes, I'm sure you already know about the points that I'm making. But the comment was for @khinsen; I think he's under the impression that we can actually keep Accelerate support.

One could argue that it is a feature (or limitation) of the Python ecosystem that you get one version of a library, without the horrible hack of name mangling. This happens in core Python. This is why there are libraries named lib and lib2 which have the same purpose but API differences. Even core Python works this way: it isn't possible to mix standard libraries across versions, even if both are technically usable on modern Python, without someone ripping one out and putting it on PyPI. There are plenty of StackOverflow questions on this, all with the same conclusion.

@ilayn If for some reason you want to have all possible combinations of all versions of everything on your machine, yes, that's a mess. But why would you want that? If you limit yourself to the combinations you actually need for your application scenarios, I bet it's going to be less. As an example, I keep exactly two Python environments on my machine: one with Python 2 + NumPy 1.8.2 for running my 20-year-old code, and one representing the state of the art of about two years ago for everything else (two years ago because I set it up two years ago, and never saw a reason to upgrade after that).

As for granularity, I was perhaps not quite clear in my proposal. What I advocate is more granularity in packaging, not in development. I would expect development of, say, f2py and SciPy to continue in close coordination. f2py-2018 and SciPy-2018 should work together. That doesn't mean they have to be packaged as a single entity. The goal is to provide more freedom for software distribution managers to do their work.

I definitely don't want to make Anaconda or any other distribution a dependency. It's more like the "abundance of somebody else's", although I don't expect the number of distributions to grow to "abundance", given that assembling them is a lot of work.

I have no idea what workflow "the user base" wants. I see lots of different user bases with different requirements. Personally I'd go for multiple environments, but if there is a significant user base that wants a single environment per machine, some distribution will take care of that. But virtual environments were invented for a reason; they solve a real problem. System-level distributions like Nix or Guix take them to another level. I don't expect them to go away.

BTW, I am actually following the mailing list of one Linux distribution (Guix). Not much fun, but a lot of down-to-earth grunt work. I am happy there are people doing this.

@xoviat I didn't suggest to "keep Accelerate support". I merely suggest to keep a SciPy variant (pretty much the current one) around not as an outdated release for the museum, but as a variant of interest for a particular user group: those for whom using Accelerate is more important than solving the problems that Accelerate creates for others. The "Accelerate first" people will have to live with the consequences of their choice. Some problems will never be fixed for them. That's probably fine with them ("known bugs are better than unknown bugs"), so why force them into something different?

It's really all about labelling and communication. I want to get away from the idealized image of software following a linear path of progress, with newer versions being "better" as indicated by "higher" version numbers. I want to replace this image with one that I consider more realistic: there is no obvious order relation between software releases. Those produced by a long-lived coherent developer community have a temporal order, but that doesn't imply anything about quality or suitability for any given application.

If the idealized image were right, we wouldn't see forks, and we wouldn't have virtual environments. Nor projects such as VersionClimber.

What I am proposing is that software developers should embrace this reality rather than denying it. They should develop (and, most of all, package and label) their products for a world of diversity.

@khinsen If you're okay with incorrect results from linear algebra functions, then we can keep accelerate support (note to others: I know how to do this). However, the main problem is that you might be the only person who wants this. And even if you are not, what happens when someone down the road blames SciPy for a problem with accelerate? What happens when someone wants to have their cake and eat it too? I can just see that happening.

@xoviat No, I am not OK with incorrect results from linear algebra functions. But I am sure that there are plenty of SciPy users who don't need the affected functions at all. In the thread you referred to, someone suggested removing/deactivating the affected functions when Accelerate is detected, which I think is a good solution (note: I cannot judge the effort required to implement this).

In a way this is part of the mega-package problem. With a more granular distribution, it would be easier to pick the stuff that works, both at the development and the distribution assembly level. One could even imagine a distribution assembler composing a domain- and platform-specific SciPy distribution in which different subpackages use different LAPACK versions, e.g. for use in HPC contexts.

But I am sure that there are plenty of SciPy users who don't need the affected functions at all.

There's minimal evidence for this statement, and I would in fact bet on the opposite. The functions are widely used but only fail in certain scenarios; in other words, your results are probably correct but may not be. Yes, this probably applies to the SciPy that you currently have installed if you're using OSX. Yes, this needs to be fixed.

As far as maintaining a separate branch, I don't think that anyone would be opposed to giving you write access to a particular branch for you to maintain. But this is open source software and people work on what they want to; I am skeptical that many people would be interested in maintaining that branch.

Actually, I think the anaconda SciPy is compiled with MKL, so you wouldn't be affected in that case. But then why would you care about accelerate support?

@xoviat It seems there's a big misunderstanding here. I have no personal stakes at all in this specific issue. I don't use any linear algebra routines from SciPy.

You pointed to a thread on a SciPy issue and asked how I would handle that kind of situation. The thread clearly shows reluctance to simply drop Accelerate support, from which I deduced that there is a significant user group that would be affected by such a change. If that user group doesn't exist, then where is the problem? Why hasn't SciPy already dropped Accelerate support?

@xoviat Maintaining a separate branch is easy for anyone. There is no need for it to be hosted in the same GitHub repository. In other words, branches are not the issue. The issue is namespacing, in order to make the parallel existence of separate SciPy versions transparent to users (and distribution assemblers).

Today, when you see code saying "import scipy", you have no idea for which range of SciPy versions it is supposed to work (i.e. has been tested to some degree). In the best case, there is a README saying "SciPy >= 0.8" or something like that. This habit is based on the assumption that "higher" versions are always "better" and never degrade (break, slow down, ...) anything. And that assumption is quite simply wrong.

If, on the other hand, the code says "import scipy2017 as scipy", then it is clear to every reader that using it with earlier or later versions might lead to bad surprises. And if old SciPy versions disappear (effectively, for lack of maintenance), then such a code will fail with an error message, rather than continuing to work unreliably.

This is the one point I am trying to make in this thread. The coexistence of different versions is a reality. The idea that higher is better is a dream. Let's be realistic and organize ourselves for the real world, by acknowledging a multiple-version universe and adjusting everybody's communication to prevent misunderstandings.

Well, dunno… in my opinion, when it comes to warnings, a version-specific import is not a warning, it is prohibitive of using a different version, since the users having the problems you describe will not dare to change your code. A warning would be printing one at install/run time saying that it is untested for all but specific numpy versions?

I suppose creating that type of extra package is possible. I also expect it will just create a different kind of hell. Much might survive, but type checking, for example, will not and cannot when you mix two versions, so basically you won't know whether it can work until you try (and no one will test this!).
And unless you are suggesting that mixing two versions be allowed, I think your scipy2017 solution will just make things worse. It seems more like we would need something like dynamic/runtime virtual-env choosing (like pin_import_version("1.6<numpy<1.10", level="raise") before any import on the Python level).
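For what it's worth, something close to that hypothetical pin_import_version() could be sketched today on top of importlib.metadata and the packaging library; the function name, the level keyword, and the PEP 440-style specifier below are illustrative assumptions, not an existing API:

```python
import warnings
from importlib.metadata import version, PackageNotFoundError  # Python 3.8+
from packaging.requirements import Requirement

def pin_import_version(requirement: str, level: str = "raise") -> None:
    """Check the installed version of a distribution against a specifier,
    e.g. pin_import_version("numpy>=1.6,<1.10"), before importing it."""
    req = Requirement(requirement)
    try:
        installed = version(req.name)
    except PackageNotFoundError:
        raise ImportError(f"{req.name} is not installed")
    if not req.specifier.contains(installed, prereleases=True):
        msg = f"{req.name} {installed} does not satisfy '{req.specifier}'"
        if level == "raise":
            raise ImportError(msg)
        warnings.warn(msg)

pin_import_version("numpy>=1.6,<1.10", level="warn")  # warn instead of failing
import numpy  # the import itself is unchanged; only the check is explicit
```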

The specific import makes sense if you have major prohibitive changes (a bit like py2/py3), and we already saw that we have different opinions on where or on what time scale that "major" line seems to be.

The backward compatibility NEP #11596 has been submitted; can we close this?

The backward compatibility NEP #11596 has been submitted; can we close this?

Yes we can close this. Independent of that NEP (which explicitly mentions semver as a rejected alternative), the consensus of the core devs here is that we don't want to change to semver. Hence closing as wontfix.

Thanks for the discussion everyone.
