cov() uses ddof=1 (normalization by N - 1) by default.
var() uses ddof=0 (normalization by N) by default.
So the following:
import numpy as np

x = np.random.rand(100)
if np.isclose(np.cov([x, x])[0, 0], np.var(x)):
    print("Consistent by default.")
if np.isclose(np.cov([x, x], ddof=0)[0, 0], np.var(x, ddof=0)):
    print("Consistent.")
will print only the second message, "Consistent."
Yes. I don't know what we can do about that at this point. This issue is probably best discussed on the mailing list.
I'd like to suggest that this is worth fixing, painful though it would be.
I think the best default for both is ddof=0, which computes a summary statistic that describes the data, as contrasted with an estimator. If someone wants an unbiased estimator instead, they should ask for it by providing ddof=1.
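For concreteness, here is a minimal sketch of what the two normalizations compute (nothing NumPy-specific beyond var itself):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = x.size
sq_dev = ((x - x.mean()) ** 2).sum()

# ddof=0: divide by N, a descriptive statistic of the data at hand
assert np.isclose(sq_dev / n, np.var(x, ddof=0))

# ddof=1: divide by N - 1, the unbiased estimator of the population variance
assert np.isclose(sq_dev / (n - 1), np.var(x, ddof=1))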
That means changing the behavior of cov, which I guess is used less than var, so that's good.
As a first step, how about adding a FutureWarning to cov if neither bias nor ddof is provided?
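A rough sketch of what that guard could look like (a hypothetical wrapper, not the actual NumPy source; a sentinel is needed to tell "not passed" apart from an explicit bias=False):

import warnings
import numpy as np

_not_passed = object()  # sentinel: distinguishes "omitted" from an explicit value

def cov_with_warning(m, y=None, rowvar=True, bias=_not_passed, ddof=_not_passed,
                     fweights=None, aweights=None):
    # Warn only when the caller relied on the implicit default normalization.
    if bias is _not_passed and ddof is _not_passed:
        warnings.warn(
            "np.cov defaults to ddof=1, unlike np.var; pass bias or ddof "
            "explicitly to make the normalization clear.",
            FutureWarning,
            stacklevel=2,
        )
    bias = False if bias is _not_passed else bias
    ddof = None if ddof is _not_passed else ddof
    return np.cov(m, y=y, rowvar=rowvar, bias=bias, ddof=ddof,
                  fweights=fweights, aweights=aweights)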
At the same time, can I suggest deprecating bias? The concept of bias is only applicable if we are thinking of variance and covariance as estimators. But again, I think the best default is to think of these functions as descriptive statistics unless we're told otherwise.
Saying the same thing a different way, if what I want is a simple descriptive statistic, it's strange to ask for a biased estimator.
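For reference, bias is fully redundant with ddof, so nothing would be lost by deprecating it:

import numpy as np

rng = np.random.default_rng(0)
m = rng.random((2, 50))

# bias=True is the same normalization as ddof=0 (divide by N)
assert np.allclose(np.cov(m, bias=True), np.cov(m, ddof=0))
# the default (bias=False) is the same as ddof=1 (divide by N - 1)
assert np.allclose(np.cov(m), np.cov(m, ddof=1))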
> ... of cov, which I guess is used less than var, so that's good.
That seems to be true (summary usage data from here):
def cov(
m: object,
y: object = ...,
rowvar: Union[float, bool, int] = ...,
bias: Union[float, int, bool] = ...,
aweights: numpy.ndarray = ...,
):
"""
usage.dask: 11
usage.matplotlib: 3
usage.pandas: 7
usage.scipy: 21
usage.sklearn: 24
"""
def var(
a: object,
axis: Union[int, None, Tuple[Union[int, None], ...]] = ...,
out: Union[dask.dataframe.core.Scalar, dask.dataframe.core.Series] = ...,
keepdims: bool = ...,
dtype: Union[Literal["i8", "f8"], Type[float], None] = ...,
ddof: int = ...,
):
"""
usage.dask: 59
usage.pandas: 13
usage.scipy: 19
usage.sklearn: 55
usage.xarray: 31
"""
> That means changing the behavior of cov
Unfortunately I don't think we can do that. We could deprecate the function and add a new one (which would be painful), but we should definitely not change the behaviour of the current function - that would silently change numerical results and make currently valid code wrong. We try never to do that; a FutureWarning isn't enough to guarantee that people see the issue.
@rgommers Yeah, "never break valid code" is a pretty good rule.
How about a warning if you call cov without bias or ddof, and then never change the behavior?
> How about a warning if you call cov without bias or ddof, and then never change the behavior?
That does seem like a reasonable thing to do. Better than deprecating cov. Having to add ddof=0/1 is a slight annoyance but makes the code more understandable, so I like the idea.