Typescript: Suggestion: non-nullable type

Created on 22 Jul 2014  ·  358 Comments  ·  Source: microsoft/TypeScript

Introduce two new syntaxes for type declarations, based on JSDoc:

var myString: !string = 'hello world'; //non-nullable
var myString1: ?string = 'hello world'; // nullable 
var myString2: string = 'hello world'; // nullable 
var myString3 = 'hello world'; // nullable

By default, types are nullable.

Two new compiler flags:

  • inferNonNullableType makes the compiler infer non-nullable types:
var myString3 = 'hello world'; // type of myString3 is '!string', non-nullable
  • nonNullableTypeByDefault (I guess there might be a better name):
var myString: !string = 'hello world'; // non-nullable
var myString1: string = 'hello world'; // non-nullable 
var myString2: ?string = 'hello world'; // nullable 
var myString3 = 'hello world' // non-nullable

Most helpful comment

Sorry to comment on a closed issue, but I don't know a better place to ask, and I don't think this is worth a new issue if there's no interest.
Would it be feasible to handle implicit null on a per-file basis?
That is, handle a bunch of .d.ts files as noImplicitNull (because they come from DefinitelyTyped and were conceived that way) but handle my own source as implicitNull?
Would anybody find this useful?

All 358 comments

I suggest using a type other than string as an example since it by nature is nullable. :P
I can see non-nullable types being problematic, since the user and the compiler of "!" expect the type to always be non-null, which can never be truly guaranteed in JavaScript. A user might define something like this:

function fn(myNonNull: !myClass): void {
  myNonNull.foo();
}

And because it's defined as non-null, everything might be happy for the compiler, but then someone else who uses it from JavaScript passes something null and kaboom.

Now that said, maybe the solution could be that for public-facing methods, it could automatically assert not-null. But then the compiler would also have to ensure that you cannot have public properties (or private ones, for that matter) with a !nonnull declaration, since those could not be enforced.

This may go deeper into the discussion of code contracts for this to be properly enforced.

Forgive my criticism, but I think there is very little need for non-nullable types if/as soon as algebraic data types are here. The reason people use null to represent a missing value is that there is no better way to do that in JavaScript, or in most OOP languages alike. So imagine ADTs are already here. Then, as for the old libs written before non-nullables, having them won't make life any better. As for new libs, with ADTs in place one can very accurately model what a value can take according to the business domain specification without using nulls at all. So I guess what I am saying is that ADTs are a way more powerful tool to address the same problem.

Personally, I just wrote a little Maybe<T> interface and use discipline to ensure that variables of that type are never null.
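For illustration, such an interface might look like this (a minimal sketch; the names are mine, and nothing here is compiler-enforced - it relies purely on that discipline):

interface Maybe<T> {
  hasValue: boolean;
  value: T; // only meaningful when hasValue is true
}

function firstChar(s: string): Maybe<string> {
  return s.length > 0
    ? { hasValue: true, value: s.charAt(0) }
    : { hasValue: false, value: null };
}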

I suggest using a type other than string as an example since it by nature is nullable. :P
I can see non-nullable types being problematic, since the user and the compiler of "!" expect the type to always be non-null, which can never be truly guaranteed in JavaScript. A user might define something like this:

function fn(myNonNull: !myClass): void {
  myNonNull.foo();
}
And because it's defined as non-null, everything might be happy for the compiler, but then someone else who uses it from JavaScript passes something null and kaboom.

I don't really understand; you can also define:

function myFunc(str: string): number {
  return str && str.length;
}

and if someone passes an int to that function it will also end up with an error. An advantage of TypeScript is to delegate to the compiler checks that you would otherwise do manually in JavaScript, so having another check for nullable/non-nullable types seems reasonable to me. By the way, SaferTypeScript and ClosureCompiler already do that sort of check.

With union types, we could have a much simpler specification for that.
Let's say we now have a basic type 'null'; we could have a 'stricter' mode where 'null' and 'undefined' are not compatible with any other type, so to express a nullable value we would write:

var myNullableString: (null | string);
var myString = "hello";
myNullableString = myString; // valid
myString = myNullableString; // error, null is not assignable to string

With the 'strict mode' activated, TypeScript should check that every non-nullable variable is initialized; also, by default, optional parameters are nullable.

var myString: string; // error
var myNullableString: (null | string); // no error

function myString(param1: string, param2?: string) {
  // param1 is string
  // param2 is (null | string)
}

@fdecampredon +1

IIRC from what Facebook showed of Flow (which uses TypeScript syntax, but with non-nullable types by default), they support a shorthand for (null | T) as in your original post - I think it was ?T or T?.

var myString: string; // error

That could potentially be quite annoying in the case where you want to initialize a variable conditionally, eg.:

var myString: string;
if (x) {
  myString = a;
} else if (y) {
  myString = b;
} else {
  myString = c;
}

In Rust for example, this is fine as long as the compiler can see that myString will get initialized before it is used but TypeScript's inference doesn't support this at the moment.

Honestly, doing something like var myString = '' instead of var myString: string does not bother me much, but sure, that kind of rule is always possible.

@fdecampredon +1 for this - I like the idea very much. For code bases that are 100% JavaScript this would be a useful compile-time only constraint. (As I understand your proposal there's no intention for generated code to enforce this?)

As for a shorthand for (null | string), sure, ?string is fine.
And sure, @johnnyreilly, it's only a compile-time check.

Sum types make non-nullable types by default a very interesting possibility. The safety properties of non-nullable by default can't be overstated. Sum types plus the planned "if/typeof destructuring" (not sure what this should be called) even make it type safe to integrate nullable and non-nullable APIs.
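A sketch of what that integration might look like, assuming a null type and the proposed narrowing (the names are illustrative):

declare function legacyFind(id: number): string | null; // nullable API

function modernFind(id: number): string { // non-nullable API
  var result = legacyFind(id);
  if (result === null) {
    throw new Error('not found');
  }
  return result; // the guard narrowed string | null to string
}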

However, making types non-nullable by default is a huge breaking change, which would require changing almost every existing third-party type definition file. While I am 100% for the breaking change, no one person is able to update the type definitions that are out there in the wild.

It's good that a great number of these definitions are collected in the DefinitelyTyped repo, but I still have practical concerns about this feature.

@samwgoldman the idea is to have non-nullable types only under a special compiler flag, like noImplicitAny. This flag could be named strict or nonNullableType. So there would be no breaking changes.

@fdecampredon What about the type definitions for non-TypeScript libraries, like those at DefinitelyTyped? Those definitions are not checked by the compiler, so any 3rd party code that could return null would need to be re-annotated in order to work correctly.

I can imagine a type definition for a function that is currently annotated as "returns string," but sometimes returns null. If I depended on that function in my nonNullableType'ed code, the compiler doesn't complain (how could it?) and my code is no longer null-safe.

Unless I'm missing something, I don't think this is functionality that can be turned on and off with a flag. It seems to me that this is an all-or-nothing semantic change to ensure interoperability. I would be happy to be proven wrong, though, because I think a flag-switched feature is more likely to happen.

As an aside, there isn't much information available yet on Facebook's Flow compiler, but from the video recording of the presentation, it seems they went with non-nullable by default. If so, at least there is some precedent here.

Ok, let's assume there is a shorthand ?type for type | null | undefined.

@fdecampredon What about the type definitions for non-TypeScript libraries, like those at DefinitelyTyped? Those definitions are not checked by the compiler, so any 3rd party code that could return null would need to be re-annotated in order to work correctly.

I can imagine a type definition for a function that is currently annotated as "returns string," but sometimes returns null. If I depended on that function in my nonNullableType'ed code, the compiler doesn't complain (how could it?) and my code is no longer null-safe.

I don't see the problem. Sure, some definition files won't be valid in nonNullableType mode, but most of the time good libraries avoid returning null or undefined, so the definitions will still be correct in the majority of cases.
Anyway, I personally can rarely pick up a DefinitelyTyped definition without having to check/modify it; you'll just have a little bit of extra work to add a ? prefix to some definitions.
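For example, a revised definition might only need a one-character change (findEntry is a hypothetical signature):

// before, implicitly nullable today:
//   declare function findEntry(key: string): string;
// after, revised for the non-nullable mode:
declare function findEntry(key: string): ?string;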

Unless I'm missing something, I don't think this is functionality that can be turned on and off with a flag. It seems to me that this is an all-or-nothing semantic change to ensure interoperability. I would be happy to be proven wrong, though, because I think a flag-switched feature is more likely to happen.

I don't see why we could not have a flag-switched feature; the rules would be simple:

  • in normal mode, ?string is equivalent to string, and null or undefined are assignable to all types
  • in nonNullableType mode, ?string is equivalent to string | null | undefined, and null or undefined are not assignable to any type other than null or undefined

Where is the incompatibility with a flag-switched feature?
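A sketch of how those two rules would play out on concrete assignments:

var s: string = null;   // normal mode: ok; nonNullableType mode: error
var t: ?string = null;  // ok in both modes
var u: string = 'hi';   // ok in both modes
u = t;                  // normal mode: ok; nonNullableType mode: error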

Flags that change the semantics of a language are a dangerous thing. One problem is that the effects are potentially very non-local:

function fn(x: string): number;
function fn(x: number|null): string;

function foo() {
    return fn(null);
}

var x = foo(); // x: number or x: string?

It's important that someone looking at a piece of code can "follow along" with the type system and understand the inferences that are being made. If we starting having a bunch of flags that change the rules of the language, this becomes impossible.

The only safe sort of thing to do is to keep the semantics of assignability the same and change what's an error vs what isn't depending on a flag, much like how noImplicitAny works today.

I know it would break retro-compatibility, and I understand @RyanCavanaugh's point of view, but after tasting that with flowtype it is honestly an invaluable feature; I hope it will end up being part of TypeScript.

In addition to RyanCavanaugh's comment: from what I read somewhere, the ES7 specification / proposal mentions the use of function overloading (same function name but different input parameter datatypes). That is a very sorely needed feature for JavaScript.

From the flow docs:

Flow considers null to be a distinct value that is not part of any other type

var o = null;
print(o.x); // Error: Property cannot be accessed on possibly null value

Any type T can be made to include null (and the related value undefined) by writing ?T

var o: ?string = null;
print(o.length); // Error: Property cannot be accessed on possibly null or undefined value

[Flow] understands the effects of some dynamic type tests

(i.e. in TS lingo understands type guards)

var o: ?string = null;
if (o == null) {
  o = 'hello';
}
print(o.length); // Okay, because of the null check

Limitations

  • Checks on object properties are limited because of the possibility of aliasing:

In addition to being able to adjust types of local variables, Flow can sometimes also adjust types of object properties, especially when there are no intermediate operations between a check and a use. In general, though, aliasing of objects limits the scope of this form of reasoning, since a check on an object property may be invalidated by a write to that property through an alias, and it is difficult for a static analysis to track aliases precisely

  • Type guard-style checks can be redundant for object properties.

[D]on't expect a nullable field to be recognized as non-null in some method because a null check is performed in some other method in your code, even when it is clear to you that the null check is sufficient for safety at run time (say, because you know that calls to the former method always follow calls to the latter method).

  • undefined is not checked.

Undefined values, just like null, can cause issues too. Unfortunately, undefined values are ubiquitous in JavaScript and it is hard to avoid them without severely affecting the usability of the language. For example, arrays can have holes for elements; object properties can be dynamically added and removed. Flow makes a tradeoff in this case: it detects undefined local variables and return values, but ignores the possibility of undefined resulting from object property and array element accesses

What if the option is added at the same time as introducing the null type (and the question-mark shorthand)? The presence of a null type in a file would force the compiler into non-nullable mode for that file, even if the flag is not present at the command line. Or is that a bit too magical?

@jbondc seems good. However, the problem with that is that it will end up with ! everywhere :p

It's tempting to want to change JavaScript but the reality is a 'string' is nullable or can be undefined.

What does this mean? There are no static types in js. So, yes, strings are "nullable", but let's not forget that they are also numberable and objectable and fooable, etc. Any value can have any type.

So when layering a static type system on top of javascript, choosing whether static types are nullable or not is just a design decision. It seems to me non-nullable types are a better default, because it's usually only in special cases that you want a function signature, for instance, to accept a null value in addition to the type specified.

Directives like "use strict" that cause scoped changes to semantics are already a part of the language; I think it would be reasonable to have a "use nonnullable types" directive in TypeScript.

@metaweta I don't think it's enough; for example, what happens if a _non null module_ consumes a nullable one:

//module A
export function getData(): string[] {
  return null;
}
//module B
'use nonnull'
import A = require('./A');

var data: string[] = A.getData();

data in module B is in fact nullable, but since 'use nonnull' was not used in module A, should we report an error?
I don't see a way to solve that problem with a directive-based feature.

Yes,

var data: string[] = A.getData();

would cause an error. Instead, you'd have to provide a default value for when getData() returns null:

var data: string[] = A.getData() || [];

@metaweta ok, but how do you know that it's an error? :)
The type of getData is still '() => string[]'; would you automatically treat everything that comes from a 'nullable module' as 'nullable'?

Yes, exactly (unless a type from the nullable module is explicitly marked otherwise).

That sounds like you now want a per-file flag that dictates whether or not that file defaults to nullable.

Personally, I think it's a bit late to introduce this change, and @RyanCavanaugh is right: the change would make TypeScript less predictable, as you would not be able to determine what was going on just by looking at a file.

Do projects start with this compiler flag on or off by default? If someone is working on a non-default-nullable project and creates or switches to a default-nullable one, will that cause confusion?
I currently work with noImplicitAny in most of my projects, and whenever I come across a project that doesn't have that option turned on it takes me by surprise.

The noImplicitAny flag is good, but in terms of flags that change the way the language behaves, I think that should be the line. Any more than that, and people who work on multiple projects started by different people with different rules are going to lose a lot of productivity due to false assumptions and slip-ups.

@RyanCavanaugh was concerned about non-locality, and directives are lexically scoped. You can't get any more local unless you annotate each site. I'm not particularly in favor of the directive; I was just pointing out that the option exists and that it's at least as reasonable as "use strict" in ES5. I'm personally in favor of non-nullable types by default, but practically, it's too late for that. Given those constraints, I'm in favor of using ! somehow. @jbondc 's proposal lets you distinguish null from undefined; given that Java backends continue to make people use both values, it seems the most useful to me.

I'm sorry if I wasn't clear, I was both agreeing with Ryan and adding my own concerns.

Honestly, if adding use not-null is the price for avoiding all the null pointer exceptions, I would pay it without any problem; considering null or undefined as assignable to any type is the worst error that TypeScript made, in my opinion.

@jbondc I have not used 'use strict' and am therefore making some assumptions, please correct me if my assumptions are wrong:

Not null does not affect the syntax that the programmer writes, but the capabilities of the next programmer that tries to use that code (assuming that creator and user are separate people).

So the code:

function myfoo (mynumber: number) {
    return !!mynumber;
} 

(typing on a phone so may be wrong)
Is valid code in both a normal project and a notnull project. The only way that the coder would know whether or not the code is working is if they look at the command line arguments.

At work we have a testing project (which includes prototyping new features) and a main project (with our actual code). When the prototypes are ready to be moved from one project to another (typically with large refactors), there would be no errors in the code, but errors in the use of the code. This behaviour is different from noImplicitAny and use strict, which would both error immediately.

Now I have a fair amount of sway in these projects, so I can warn the people in charge not to use this new 'feature' because it wouldn't save time, but I don't have that capacity over all of the projects at work.
If we want to enable this feature in even one project, then we have to enable it in all of our other projects, because we have a very significant amount of code sharing and code migration between projects, and this 'feature' would cost us a lot of time going back and 'fixing' functions that were already finished.

Right @jbondc. @Griffork: Sorry I didn't catch that misunderstanding; a directive is a literal string expression that appears as the first line of a program production or function production and its effects are scoped to that production.

"use not-null";
// All types in this program production (essentially a single file) are not null

versus

function f(n: number) {
  "use not-null";
  // n is not null and local variables are not null
  function g(s: string) {
    // s is not null because g is defined in the scope of f
    return s.length;
  }
  return n.toFixed(2);
}

function h(n: number) {
  // n may be null
  if (n) { return n.toFixed(3); }
  else { return null; }
}

Non-nullable types are useless. Non-nullable types are useless. They are useless. Useless! You don't realize it, but you don't really need them. There is very little sense in restricting yourself by proclaiming that from now on we are not going to be using NULLs. How would you represent a missing value, for example in a situation when you are trying to find a substring that is not there? Not being able to express a missing value (what NULL does for now) isn't going to solve your problem. You will trade a harsh world with NULLs everywhere for an equally harsh one with no missing values at all. What you really need is called algebraic data types, which (among many other cool things) feature the ability to represent a missing value (what you are looking for in the first place, and what is represented by NULL in the imperative world). I am strongly against adding non-nullables to the language, because it looks like useless syntactic/semantic trash that is a naive and awkward solution to a well-known problem. Please read about Optionals in F# and Maybe in Haskell, as well as variants (aka tagged unions, discriminated unions) and pattern matching.

@aleksey-bykov It sounds like you're unaware that JavaScript has two nullish values, undefined and null. The null value in JavaScript is only returned on a non-matching regexp and when serializing a date in JSON. The only reason it's in the language at all was for interaction with Java applets. Variables that have been declared but not initialized are undefined, not null. Missing properties on an object return undefined, not null. If you explicitly want to have undefined be a valid value, then you can test propName in obj. If you want to check whether a property exists on the object itself rather than if it's inherited, use obj.hasOwnProperty(propName). Missing substrings return -1: 'abc'.indexOf('d') === -1.
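Those distinctions, condensed into runnable checks (standard JavaScript behaviour):

var obj = { a: undefined };
console.log('a' in obj);              // true: the property exists
console.log(obj.hasOwnProperty('a')); // true: and is not inherited
console.log('b' in obj);              // false: genuinely missing
console.log(obj['b']);                // undefined, not null
console.log('abc'.indexOf('d'));      // -1 for a missing substring
console.log('abc'.match(/d/));        // null: non-matching regexp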

In Haskell, Maybe is useful precisely because there's no universal subtype. Haskell's bottom type represents non-termination, not a universal subtype. I agree that algebraic data types are needed, but if I want a tree labeled by integers, I want every node to have an integer, not null or undefined. If I want those, I'll use a tree labeled by Maybe int or a zipper.

If we adopt a "use not-null" directive, I'd also like "use not-void" (neither null nor undefined).

If you want to guard your own code from nulls, just prohibit the null literal. It's way easier than developing non-nullable types. Undefined is a little bit more complicated, but if you know where the undefineds are coming from then you know how to avoid them. Bottom in Haskell is invaluable! I wish JavaScript (TypeScript) had a global super type without a value. I miss it badly when I need to throw in an expression. I've been using TypeScript since v0.8 and never used nulls, let alone had a need for them. Just ignore them like you do any other useless language feature, like the with statement.

@aleksey-bykov If I'm writing a library and want to guarantee that inputs are not null, I have to do runtime tests for it everywhere. I want compile-time tests for it, it's not hard to provide, and both Closure and Flow provide support for non-null/undefined types.

@metaweta, you cannot guard yourself from nulls. Before your code is compiled there are a gazillion ways to make your lib cry: pleaseNonNullablesNumbersOnly(<any> null). After it is compiled to JS there are no rules at all. Secondly, why would you care? Say it loud and clear upfront: nulls are not supported, you put a null in, you get a crash, like a disclaimer. You cannot guard yourself from all sorts of people out there, but you can outline your scope of responsibilities. Thirdly, I can hardly think of a major mainstream lib that is bulletproof to whatever a user may put in as input, yet they are still crazy popular. So is your effort worth the trouble?

@aleksey-bykov If my library's clients are also type-checked, then I certainly can guarantee I won't get a null. That's the whole point of TypeScript. By your reasoning, there's no need for types at all: just "say loud and clear" in your documentation what the expected type is.

Off topic: nulls are extremely valuable for us, because checking for them is faster than checking against undefined.
While we don't use them everywhere, we try to use them where possible to represent uninitialised values and missing numbers.

On topic:
We've never had an issue with nulls 'escaping' into other code, but we have had issues with random undefineds or NaNs appearing. I believe that careful code management is better than a flag in this scenario.

However, for library typings it would be nice to have the redundant type null, so that we can choose to annotate functions that can return null (this should not be enforced by the compiler, but by coding practices).

@metaweta, by my reasoning your clients should not use nulls in their code base; it's not that hard: do a full search for null (case sensitive, whole word) and delete all of them. Too clunky? Add a compiler switch --noNullLiteral for fanciness. Everything else stays intact: same code, no worries, a way lighter solution with a minimal footprint. Back to my point, suppose your non-nullable types found their way into TS and are available in 2 different flavors:

  • one can use ! syntax to denote a type that cannot take a null, for example string! cannot take a null
  • noNullsAllowed switch is on

then you get a piece of JSON from your server over ajax with nulls everywhere. Moral: the dynamic nature of JavaScript cannot be fixed by a type annotation on top of it.

@aleksey-bykov By the same token, if I'm expecting an object with a numeric property x and I get {"x":"foo"} from the server, the type system won't be able to prevent it. That's necessarily a runtime error and an inescapable problem when using something other than TypeScript on the server.

If, however, the server is written in TypeScript and running on node, then it can be transpiled in the presence of a .d.ts file for my front end code and the type checking will guarantee that the server will never send JSON with nulls in it or an object whose x property is not a number.

@metaweta, non-nullable types sure would be another type safety measure, I am not questioning that. I am saying that by imposing some very basic discipline (avoiding null literals in your code) you can eliminate 90% of your problems without asking for any help from the compiler. And even if you have enough resources to enforce this measure, you still won't be able to eliminate the remaining 10% of the problems. So what is the question after all? I ask: do we really need it that badly? I don't. I learned how to live without nulls successfully (ask me how); I don't remember the last time I got a null reference exception (besides in data from our server). There is way cooler stuff I wish we had. This particular one is so insignificant.

Yes, we do need this badly. See The Billion Dollar Mistake by Hoare. The null pointer exception (NPE) is the most common error encountered in typed programming languages that don't discriminate nullable from non-nullable types. It's so common that Java 8 added Optional in a desperate attempt to battle it.

Modelling nullables in the type system is not just a theoretical concern, it's a huge improvement. Even if you take great care to avoid nulls in your code, the libraries you use might not, and therefore it's useful to be able to model their data properly with the type system.

Now that there are unions and type guards in TS, the type system is powerful enough to do this. The question is whether it can be done in a backward-compatible way. Personally I feel that this feature is important enough for TypeScript 2.0 to be backwards-incompatible in this regard.

Implementing this feature properly is likely to point to code that is already broken rather than break existing code: it will simply point to the functions that leak nulls outside them (most likely unintentionally) or classes that don't properly initialize their members (this part is harder as the type system may need to make allowance for member values to be initialized in the constructors).

This is not about _not using_ nulls. It's about properly modelling all the types involved. In fact, this feature would allow the use of nulls in a safe way - there would be no reason to avoid them anymore! The end result would be very similar to pattern matching on an algebraic Maybe type (except it would be done with an if check rather than a case expression).

And this isn't just about null literals. null and undefined are structurally the same (afaik there are no functions/operators that work on one but not the other), therefore they could be modelled sufficiently well with a single null type in TS.
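For instance, a single null type would cover both nullish values in one annotation (getAttr is a hypothetical signature; note that a loose != null check already rules out undefined as well):

declare function getAttr(el: Element, name: string): string | null;

var title = getAttr(document.body, 'title');
if (title != null) {
  console.log(title.length); // safe: neither null nor undefined here
}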

@metaweta,

The null value in JavaScript is only returned on a non-matching regexp and when serializing a date in JSON.

Not true at all.

  • Interaction with the DOM produces null:
  console.log(window.document.getElementById('nonExistentElement')); // null
  • As @aleksey-bykov pointed out above, ajax operations can return null. In fact undefined is not a valid JSON value:
 JSON.parse(undefined); // error
 JSON.parse(null); // okay
 JSON.stringify({ "foo" : undefined}); // "{}"
 JSON.stringify({ "foo" : null}); // '{"foo":null}'

NB: We can pretend that undefined is returned via ajax, because accessing a non-existent property will result in undefined - which is why undefined is not serialised.

If, however, the server is written in TypeScript and running on node, then it can be transpiled in the presence of a .d.ts file for my front end code and the type checking will guarantee that the server will never send JSON with nulls in it or an object whose x property is not a number.

This is not entirely correct. Even if the server is written in TypeScript, one can in no way prevent nulls from being introduced without checking every single property of every single object obtained from persistent storage.

I kind of agree with @aleksey-bykov on this. While it would be absolutely brilliant if we could have TypeScript alert us at compile time about errors introduced by null and undefined, I fear it will only induce a false sense of confidence and end up catching trivia while the real sources of null go undetected.

Even if the server is written in TypeScript, one can in no way prevent nulls from being introduced without checking every single property of every single object obtained from persistent storage.

This is in fact an argument _for_ non-nullable types. If your storage can return null Foos, then the type of the object retrieved from that storage is Nullable<Foo>, not Foo. If you then have a function that is meant to return Foo, then you _have_ to take responsibility for handling the null (either you cast it because you know better, or you check for null).

If you didn't have non-nullable types you would not necessarily think to check for null when returning the stored object.

I fear it will only induce a false sense of confidence and end up catching trivia while the real sources of null go undetected.

What sort of non-trivia do you think non-nullable types will miss?

This is not entirely correct. Even if the server is written in TypeScript, one can in no way prevent nulls from being introduced without checking every single property of every single object obtained from persistent storage.

If the persistent storage supports typed data, then there would be no need. But even if that weren't the case, you'd have checks only at the data fetching points and then have a guarantee throughout _all_ of your other code.

I kind of agree with @aleksey-bykov on this. While it would be absolutely brilliant if we could have TypeScript alert us at compile time about errors introduced by null and undefined, I fear it will only induce a false sense of confidence and end up catching trivia while the real sources of null go undetected.

Using nullable types wouldn't be an absolute requirement. If you feel that it's unnecessary to model the cases where a method returns null because they're "insignificant", you could just not use a nullable type in that type definition (and get the same unsafety as always). But there is no reason to think that this approach will fail - there are examples of languages that have successfully implemented it already (e.g. Kotlin by JetBrains).

@aleksey-bykov Honestly, you've got it completely wrong; one of the best things about non-nullable types is _the possibility to express a type as nullable_.
With your strategy of never using null to prevent null pointer errors, you completely lose the possibility of using null, out of fear of introducing errors; that's completely silly.

Another thing: please, in a discussion about a language feature, don't go with stupid comments like:

Non-nullable types are useless. Non-nullable types are useless. They are useless. Useless! You don't realize it, but you don't really need them.

That just makes me feel like I should ignore whatever you ever post anywhere on the web. We are here to discuss a feature; I can understand and gladly accept that your point of view is not mine, but don't behave like a kid.

I am not against introducing non-null type annotations at all. They have been shown to be useful in C# and other languages.

The OP has changed the course of the discussion with the following:

Honestly, if adding use not-null is the price for avoiding all the null pointer exceptions, I would pay it without any problem; considering null or undefined as assignable to any type is the worst error that TypeScript made, in my opinion.

I was merely pointing out the prevalence of null and undefined in the wild.

I should also add that one of the things that I truly appreciate about TypeScript is the laissez-faire attitude of the language. It has been a breath of fresh air.

Insisting that types are non-nullable by default goes against the grain of that spirit.

I've been seeing a number of arguments floating around as to why we do / don't need this, and I want to see if I understand all of the underlying cases that have been discussed thus far:

1) want to know whether or not a function can return null (caused by its execution pattern, not its typing).
2) want to know if a value can be null.
3) want to know if a data object can contain null values.

Now, there are only two cases in which situation 2 can occur: if you're using nulls, or if a function returns a null. If you eliminate all nulls from your code (assuming that you don't want them) then situation 2 can really only occur as a result of situation 1.

Situation 1, I think, is best solved by annotating the function's return type to show the presence of a null value. _This does not mean that you need a non-null type_. You can annotate the function (for example by using union types) and not have non-null types; it's just like documentation, but probably clearer in this case.

Situation 2 is also solved by this.

This allows programmers working at their company to use processes and standards to enforce that null types are marked up, rather than the TypeScript team (exactly the same way that the whole typing system is an opt-in, so would the explicit nullable types be an opt-in).

As for scenario 3, the contract between the server and the client is not for TypeScript to enforce. Being able to mark up the affected values as possibly null might be an improvement, but eventually you'll get the same guarantee from that as TypeScript's tooling gives you on every other value (which is to say, none, unless you have good coding standards or practices).

(posting from phone, sorry for errors)

@fdecampredon, it's not fear in the first place; it's that using null is unnecessary. I don't need nulls. As a nice bonus, I got the problem of null reference exceptions eliminated. How is it all possible? By employing a sum type with an empty case. Sum types are a native feature of all FP languages like F#, Scala and Haskell, and together with product types they are called algebraic data types. Standard examples of sum types with an empty case are Optional from F# and Maybe from Haskell. TypeScript doesn't have ADTs, but instead it has a discussion in progress about adding non-nullables, which would model one special case of what ADTs would have covered. So my message is: ditch the non-nullables for ADTs.

@spion, bad news: F# got nulls (as a legacy of .NET). Good news: no one uses them. How can you not use null when it's there? They have Optionals (just like the most recent Java, as you mentioned). So you don't need null if you have a better choice at your disposal. This is what I am ultimately suggesting: leave nulls (non-nulls) alone, just forget they exist, and implement ADTs as a language feature.

We use null, but not the same way you people do. My company's source code comes in 2 parts.

1) When it comes to data (like database data), we replace null with blank data at the time of variable declaration.

2) For all others, like programmable objects, we use null so we know when there's a bug or loophole in the source code where objects aren't created or assigned that aren't necessarily JavaScript objects. Undefined indicates JavaScript object issues, where there's a bug or loophole in the source code.

We don't want the data to be nullable because it is customer data and they'll see null wordings.

@aleksey-bykov TypeScript has ADTs with union types; the only thing missing is pattern matching, but that's a feature that just cannot be implemented with the TypeScript philosophy of generating JavaScript close to the original source.

On the other hand, it's impossible to express with a union type the exclusion of the null value; that's why we need non-null types.

@fdecampredon, TS doesn't have ADTs; it has unions, which are not sum types because, as you said, 1. they cannot model an empty case correctly since there is no unit type, 2. there is no reliable way to destructure them.

Pattern matching for ADTs can be implemented in a way that is aligned closely with the generated JavaScript; anyway, I hope that this argument isn't a turning point.

@aleksey-bykov this is not F#. It's a language that aims to model the types of JavaScript. JavaScript libraries use null and undefined values; therefore, those values should be modeled with the appropriate types. Since not even ES6 supports algebraic data types, it doesn't make sense to use that solution given TypeScript's design goals.

Additionally, JavaScript programmers typically use if checks (in conjunction with typeof and equality tests) instead of pattern matching. These can already narrow TypeScript union types. From this point it's only a tiny step to support non-nullable types, with benefits comparable to an algebraic Maybe etc.
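That step, sketched with TS 1.4-style type guards (assuming null gets its own type, as proposed):

function describe(x: string | number | null) {
  if (x === null) {
    return 'nothing';       // x is null in this branch
  }
  if (typeof x === 'string') {
    return x.toUpperCase(); // x narrowed to string
  }
  return x.toFixed(2);      // x narrowed to number
}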

I'm surprised that nobody has actually mentioned the huge changes that lib.d.ts may need, or the potential problems with the transient null state of class fields during constructor invocation. Those are some real, actual potential issues for implementing non-nullable types...

@Griffork the idea is to avoid having null checks everywhere in your code. Say you have the following function:

declare function getName(personId:number):string|null;

The idea is that you check whether the name is null only once, and all the rest of the code executes free from worries about adding null checks.

function doSomethingWithPersonsName(personId:number) {
  var name = getName(personId);
  if (name != null) return doThingsWith(name); // type guard narrows string|null to just string
  else { return handleNullCase(); }
}

And now you're set! The type system guarantees that doThingsWith will be called with a name that is not null:

function doThingsWith(name:string) {
  // Lets create some funny versions of the name
  return [uppercasedName(name), fullyLowercased(name), funnyCased(name)]
}

None of these functions need to check for a null, and the code will still work without throwing. And, as soon as you try to pass a nullable string to one of these functions, the type system will tell you immediately that you've made an error:

function justUppercased(personId:number) {
  var name = getName(personId);
  return uppercasedName(name); // error, passing nullable to a function that doesn't check for nulls.
}

This is a huge benefit: now the type system tracks whether functions can handle nullable values, and furthermore, it tracks whether they actually need to. Much cleaner code, fewer checks, more safety. And this is not just about strings - with a library like runtime-type-checks you could also build type guards for much more complex data.

And if you don't like the tracking, because you feel that it's not worth modeling the possibility of a null value, you can revert to the good old unsafe behavior:

declare function getName(personId:number):string;

and in those cases, TypeScript will only warn you if you do something that is obviously wrong:

uppercasedName(null);

I frankly don't see any downsides, except for backward compatibility.

@fdecampredon Union types are just that, unions. They are not _disjoint_ unions a.k.a. sums. See #186.

@aleksey-bykov

Note that adding an option type is still going to be a breaking change.

// lib.d.ts
interface Document {
    getElementById(id: string): Maybe<Element>;
}

...

// Code that worked with 1.3
var myCanvas = <HTMLCanvasElement>document.getElementById("myCanvas");
// ... now throws the error that Maybe<Element> can't be cast to an <HTMLCanvasElement>

After all, you can get a poor man's option type right now with destructuring:

class Option<T> {
    hasValue: boolean;
    value: T;
}

var { hasValue, value: myCanvas } = <Option<HTMLCanvasElement>> $("myCanvas");
if (!hasValue) {
    throw new Error("Canvas not found");
}
// Use myCanvas here

but the only value from this is if lib.d.ts (and any other .d.ts, and your whole codebase, but we'll assume we can fix that) also uses it; otherwise you're back to not knowing whether a function that doesn't use Option can return null or not unless you look at its code.

Note that I also am not in favor of types being non-null by default (not for TS 1.x anyway). It is too big a breaking change.

But let's say we're talking about 2.0. If we're going to have a breaking change anyway (adding option types), why not make types non-nullable by default as well? Making types non-nullable by default and adding option types are not mutually exclusive. The latter can be standalone (e.g. in F#, as you point out) but the former requires the latter.

@Arnavion, there is some misunderstanding; I didn't say we need to replace the signatures. All existing signatures stay intact; it's for new developments that you are free to go either with ADTs or whatever else you want. So no breaking changes. Nothing is being made non-null by default.

If ADTs are here, it's up to a developer to wrap all the places where nulls can leak into the application, by transforming them into optionals. This can be an idea for a standalone project.

@aleksey-bykov

I didn't say we need to replace the signatures, all existing signatures stay intact

_I_ said that the signatures need to be replaced, and I gave the reason already:

but the only value from this is if lib.d.ts (and any other .d.ts, and your whole codebase, but we'll assume we can fix that) also uses it; otherwise you're back to not knowing whether a function that doesn't use Option can return null or not unless you look at its code.


Nothing is being made non-null by default. Ever.

For TS 1.x, I agree, only because it is too big a breaking change. For 2.0, using option type in the default signatures (lib.d.ts etc.) would already be a breaking change. Making types non-nullable by default _in addition to that_ becomes worth it and carries no downsides.

I disagree; introducing optionals should not break anything. It's not that we must all use either optionals or nullables or non-nullables; everyone uses whatever they want. The old way of doing things should not depend on new features. It's up to a developer to use the appropriate tool for his immediate needs.

So you're saying that if I have a function foo that returns Option<number> and a function bar that returns number, I'm not allowed to be confident that bar cannot return null unless I look at the implementation of bar or maintain documentation "This function never returns null."? Don't you think this punishes functions which will never return null?

Function bar from your example was known to be nullable from the beginning of time; it was used in some 100500 applications all around, and everyone treated the result as nullable. Now you came around, looked inside of it, and discovered that null is impossible. Does that mean you should go ahead and change the signature from nullable to non-nullable? I think you should not, because this knowledge, although valuable, isn't worth breaking 100500 applications. What you should do is come up with a new lib with revised signatures that does it like this:

old_lib.d.ts

...
declare function bar(): number; // looks like it can return a potentially nullable number
...

revised_lib.d.ts

declare function bar(): !number; // now, thanks to that knowledge, we are 100% certain it cannot return null

now the legacy apps keep using the old_lib.d.ts; for new apps, a developer is free to choose revised_lib.d.ts

Unfortunately, revised_lib.d.ts has 50 other functions that I haven't looked at yet, all of which return number (but I don't know whether that's really number or a nullable number). What now?

Well, take your time, ask for help, use versioning (depending on the level of knowledge you've gained so far, you may want to release it gradually with an ever-increasing version number: revised_lib.v-0.1.12.d.ts)

It's not necessary, actually. A function that returns a nullable type in the signature but a non-nullable type in the implementation only results in redundant error checking by the caller; it doesn't compromise safety. Gradually annotating more and more functions with ! as you discover them will work just fine, as you said.
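A sketch of why the conservative signature stays safe (bar is hypothetical):

// suppose bar's implementation never actually returns null,
// but its definition file still declares a nullable result:
declare function bar(): ?number;

var n = bar();
if (n != null) {               // redundant given the implementation...
  console.log(n.toFixed(2));   // ...but the cost is only noise, never a crash
}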

I'm not a fan of ! only because it's more baggage to type (both in terms of keystrokes and needing to remember to use it). If we want non-nullability in 1.x then ! is one of the options already discussed above, but I would still say that having an eventual breaking change with 2.0 and making non-nullability the default is worth it.

On the other hand, maybe it'll lead to a Python 2/3-esque situation, where nobody upgrades to TS 2.0 for years because they can't afford to go through their million-line codebase making sure that every variable declaration and class member and function parameter and... is annotated with ? if it can be null. Even 2to3 (the Python 2 to 3 migration tool) doesn't have to deal with wide-ranging changes like that.

Whether 2.0 can afford to be a breaking change depends on the TS team. I would vote for yes, but then I don't have a million-line codebase that will need fixing for it, so maybe my vote doesn't count.

Perhaps we should ask the Funscript folks how they reconcile the DOM API returning nulls with F# (Funscript uses TypeScript's lib.d.ts and other .d.ts files for use from F# code). I've never used it, but looking at http://funscript.info/samples/canvas/index.html for example, it seems the type provider does not think that document.getElementsByTagName("canvas")[0] can ever be undefined.

Edit: Here it seems document.getElementById() is not expected to return null. At the very least it doesn't seem to be returning Option<Element>, seeing as it is accessing .onclick on the result.

@spion
Thanks, I hadn't thought of that.

At work our codebase is not small, and this breaking change that people want would set us back a lot of time with very little gain. Through good standards and clear communication between our developers we've not had problems with nulls appearing where they shouldn't.
I am honestly surprised that some people are pushing for it so badly.
Needless to say this would give us very little benefit and would cost us a lot of time.

@Griffork have a look at this filtered list of issues in the TypeScript compiler for an estimate of how big a benefit this change could bring. All of the "crash" bugs listed at that link could be avoided by using non-nullable types. And we're talking about the awesome Microsoft level of standards, communication and code review here.

Regarding breakage, I think that if you continue using existing type definitions it's possible that you won't get any errors at all, except the compiler pointing out potentially uninitialized variables and fields leaking out. On the other hand, you might get a lot of errors, particularly if class fields are often left uninitialized in constructors in your code (to be initialized later). Therefore I understand your concerns, and I'm not pushing for a backward-incompatible change for TS 1.x. I still hope that I've managed to persuade you that if any change to the language was ever worthy of breaking backward compatibility, it's this one.

In any case, Facebook's Flow does have non-nullable types. Once it's more mature, it might be worth investigating as a replacement for TS for those of us who care about this issue.

@spion, the only number your list gives us is how many times null or undefined were mentioned for any reason out there; basically, it only says that null and undefined have been talked about. I hope it's clear that you cannot use it as an argument.

@aleksey-bykov I am not - I looked at _all_ the issues on that filtered list and every single one that had the word "crash" in it was related to a stack trace which shows that a function attempted to access a property or a method of an undefined or null value.

I tried to narrow the filter with different keywords and I (think I) managed to get all of them

@spion
Question: how many of those errors occur in locations where the variable would need to be marked as nullable or undefinable anyway?

E.g. if an object can have a parent, and you initialise parent to null, and you're always going to have one object whose parent is null, you will still have to declare the parent as possibly null.
The problem here is if a programmer writes some code with the assumption that a loop will always break before it reaches the null parent. That's not a problem with null being in the language; the exact same thing would happen with undefined.

Reasons for keeping null as easy to use as it is now:
  • It's a better default value than undefined:
    1) it's faster to check in some cases (our code must be very performant);
    2) it makes for-in loops work more predictably on objects;
    3) it makes array usage make more sense when using nulls for blank values (as opposed to missing values). Note that delete array[i] and array[i] = undefined have different behaviour when using indexOf, and probably other popular array methods (demonstrated below).
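The indexOf difference from point 3, demonstrated (standard array semantics):

var withNulls = [1, null, 3];
withNulls.indexOf(null);     // 1: a null 'blank' is still a findable element

var deleted = [1, 2, 3];
delete deleted[1];           // leaves a hole
deleted.indexOf(undefined);  // -1: indexOf skips holes entirely

var assigned = [1, 2, 3];
assigned[1] = undefined;
assigned.indexOf(undefined); // 1: an explicit undefined element is found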

What I feel the result of making nulls require extra markup to use in the language would be:
  • I get an undefined error instead of a null error (which is what would happen in most of the TypeScript scenarios).

When I said we don't have a problem with nulls escaping, I meant that variables that are not initialised to null never become null; we still get null exception errors in exactly the same places we would get undefined errors (as does the TypeScript team). Making null harder to use (by requiring extra syntax) and leaving undefined the same will actually cause more problems for some developers (e.g. us).

Adding extra syntax to use null means that for several weeks/months, developers who use a lot of null will be making errors left, right and center while they try to remember the new syntax. And it will be another way to slip up in the future (by annotating something slightly incorrectly). [Time to point out that I hate the idea of using symbols to represent types; it makes the language less clear.]

Until you can explain to me a situation in which null causes an error or problem that undefined would not, I won't agree with making null significantly harder to use than undefined. It has its use case, and just because it's not helping you doesn't mean that the breaking change you want (which _will_ hurt the workflow of other developers) should go ahead.

Conclusion:
There is no point in being able to declare non-nullable types without being able to define non-undefined types. And non-undefined types are not possible due to the way javascript works.

@Griffork when I say non-nullable, I mean non-nullable-or-undefined. And it's not true that this is impossible due to the way JS works: with the new type guard features, once you use a type guard, you know that the value flowing from there cannot be null or undefined anymore. There is a tradeoff involved there too, and I submit Facebook Flow's implementation as proof that it's quite doable.

The _only_ case that will become slightly harder is this one: I will temporarily assign null to this variable, then I will use it in this other method as if it were not null, but I know that I'll never call this other method before initializing the variable first, so there is no need to check for nulls. This is very brittle code, and I would definitely welcome the compiler warning me about it: it's a refactor away from being a bug anyway.
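A sketch of that brittle pattern (my illustration; under non-nullable types the initializer itself would be flagged):

class Loader {
  data: string = null; // 'temporarily' null on purpose

  init() {
    this.data = 'ready';
  }

  use() {
    // no null check: relies on init() always having run first
    return this.data.length; // a refactor away from a crash
  }
}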

@spion
I do believe that I'm finally understanding where you're coming from.

I believe that you want type guards to help determine when a value cannot be null, allowing you to call functions that don't check for null within. And if the guarding if-statement is removed, then that becomes an error.

I can see that being useful.

I also believe that this isn't really going to be the silver bullet you're hoping for.

A compiler that does not run your code and test every facet of it is not going to be better at determining where undefineds/nulls are than the programmer who wrote the code. I am concerned that this change would lull people into a false sense of security and actually make null/undefined errors more difficult to track when they do occur.
Really, I think the solution you need is a good set of testing tools that support TypeScript and can reproduce these sorts of bugs in JavaScript, rather than implementing a type in a compiler that cannot deliver on its promise.

You mention Flow as having a solution to this problem, but when reading your link I saw some concerning things:

"Flow can sometimes also adjust types of object properties, especially when there are no intermediate operations between a check and a use. In general, though, aliasing of objects limits the scope of this form of reasoning, since a check on an object property may be invalidated by a write to that property through an alias, and it is difficult for a static analysis to track aliases precisely."

"Undefined values, just like null, can cause issues too. Unfortunately, undefined values are ubiquitous in JavaScript and it is hard to avoid them without severely affecting the usability of the language[...] Flow makes a tradeoff in this case: it detects undefined local variables and return values, but ignores the possibility of undefined resulting from object property and array element accesses."

Now, undefined and null work differently, undefined errors can still show up everywhere, the question mark cannot guarantee that the value is not null, and the language behaves more differently from JavaScript (which is what TS is trying to avoid, from what I've seen).

p.s.

interface whatiknowaboutmyobject {
    hidden: boolean;
    description?: string;
}

declare var thing: whatiknowaboutmyobject;

function foo(thing: whatiknowaboutmyobject) {
    if (thing.hidden) {
        delete thing.description;
    }
}

if (typeof thing.description === "string") {
    //thing.description is non-nullable now, right?
    foo(thing);
    //What is thing.description?
    console.log(thing.description.length);
}

TS is already vulnerable to aliasing effects (as is any language that allows mutable values). This will compile without any errors:

function foo(obj: { bar: string|number }) {
    obj.bar = 5;
}

var baz: { bar: string } = { bar: "5" };

foo(baz);

console.log(baz.bar.charAt(0)); // Runtime error - Number doesn't have a charAt method

The Flow docs are stating this only for completeness.

Edit: Better example.

Old:

Yes, but I argue that the ways of setting something to undefined are much more numerous than the ways of setting something to another value.

I mean,

mything.mystring = 5; // is clearly wrong.
delete mything.mystring; // is not clearly wrong - this is not quite the equivalent of setting mystring to undefined.

Edit:
Meh, at this point it's pretty much personal preference. After using JavaScript for ages, I do not think this suggestion is going to help the language. I think it's going to lull people into a false sense of security, and I think that it will drive TypeScript (as a language) away from JavaScript.

@Griffork For an example of how current TypeScript lulls you into a false sense of security, try the example you presented in the playground:

var mything = {mystring: "5"}; 
delete mything.mystring;
console.log(mything.mystring.charAt(1));

By the way, the delete operator could be treated the same way as assigning a value of type null, and that would be sufficient to cover your case too.

The claim that the language will behave differently from JavaScript is true, but meaningless. TypeScript already has behavior different from JavaScript. The point of a type system has always been to disallow programs that don't make sense. Modelling non-nullable types simply adds a couple of extra restrictions. Disallowing the assignment of null or undefined values to a variable of non-nullable type is precisely the same as disallowing the assignment of a number to a variable of type string. JS allows both; TS could allow neither.

@spion

the idea is to avoid having null checks everywhere in your code

If I understand what you are advocating:

A. Make all types non-null by default.
B. Mark fields and variables that are nullable.
C. Ensure the application/library developer checks all entry points into the application.

But doesn't that mean the onus for ensuring one's code is free of nulls is on the person writing the code, and not on the compiler? We are effectively telling the compiler "dw, I'm not letting any nulls into the system."

The alternative is to say, nulls are everywhere, so don't bother, but if something is non-nullable then I'll let you know.

The fact is the latter approach is prone to null reference exceptions, but it's more truthful. Pretending that a field on an object obtained over the wire (i.e. ajax) is non-null implies having faith in God :smiley: .

I believe there is strong disagreement on this issue because, depending on what one is working on, item C above could either be trivial or infeasible.

@jbondc I'm glad you asked that. Indeed CallExpression is marked as undefined or nullable. However, the type system currently does not take any advantage of that - it still allows all the operations on typeArguments as if it isn't null or undefined.

However, when using the new union types in combination with non-nullable types, the type could be expressed as NodeArray<TypeNode>|null. Then the type system will not allow any operations on that field unless a null check is applied:

if (ce.typeArguments != null) {
  callSomethingOn(ce.typeArguments)
}

// callSomethingOn doesn't need to perform any checks

function callSomethingOn(na:NodeArray<TypeNode>) {
...
}

With the help of TS 1.4 type guards, inside the if block the type of the expression will be narrowed to NodeArray<TypeNode> which in turn will allow all NodeArray operations on that type; additionally, all functions called within that check will be able to specify their argument type to be NodeArray<TypeNode> without performing any more checks, ever.

But if you try to write

function someOtherFunction(ce: CallExpression) {
  callSomethingOn(ce.typeArguments)
}

the compiler will warn you about it at compile time, and the bug simply wouldn't have happened.

So no, @NoelAbrahams, this is not about knowing everything for certain. It's about the compiler helping you tell what values a variable or field can contain, just like with all other types.

Of course, with external values, as always, it's up to you to specify what their types are. You can always say that external data contains a string instead of a number, and the compiler won't complain, yet your program will crash when trying to do string operations on the number.

But without non-nullable types, you don't even have the ability to say that a value can't be null. The null value is assignable to each and every type, and you can't impose any restrictions on it. Variables can be left uninitialized and you won't get any warnings, since undefined is a valid value for any type. Therefore, the compiler is unable to help you catch null- and undefined-related errors at compile time.

I find it surprising that there are so many misconceptions about non-nullable types. They're just types that can't be left uninitialized or can't be assigned the values null or undefined. This isn't that dissimilar to being unable to assign a string to a number. This is my last post on this issue, as I feel that I'm not really getting anywhere. If anyone is interested in finding out more, I recommend starting with the video "The billion dollar mistake" I mentioned above. The issue is well known and tackled by many modern languages and compilers successfully.

@spion, I do entirely agree about all the benefits of being able to state whether a type can or cannot be null. But the question is do you want types to be non-null _by default_?

Pretending that a field on an object obtained over the wire (i.e. ajax) is non-null implies having faith in God

So don't pretend. Mark it as nullable and you'll be forced to test it before using it as non-nullable.
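For example (a sketch, using a hypothetical type for data that came over the wire):

interface WireFoo {
    id: number;               // trusted: non-null by default
    comment: string | null;   // explicitly nullable: must be checked
}

function show(foo: WireFoo) {
    foo.id.toFixed(0);            // okay
    foo.comment.length;           // error: comment is possibly null
    if (foo.comment != null) {
        foo.comment.length;       // okay after the test
    }
}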

Sure. It boils down to whether we want to mark a field as nullable (in a system where fields are non-null by default) or whether we want to mark a field as non-nullable (in a system where fields are nullable by default).

The argument is that the former is a breaking change (which may or may not be of significance) and also untenable because it requires the application developer to check and guarantee all entry points.

@NoelAbrahams I don't see why it's 'untenable': basically, most of the time you don't want null, and when an entry point can return null you will have to check it. In the end, a type system with non-null types as the default will let you write fewer null checks, because you will be able to trust some api/library/entry point in your application.

When you think a bit about it, marking a type as non-null in a nullable type system has limited value: you will still be able to consume nullable-typed variables/return types without being forced to test them.
It will also force definition authors to write more code, since well-designed libraries rarely return null or undefined values.
Finally, even the concept is strange. In a type system with non-nullable types, a nullable type is perfectly expressible as a union type: ?string is the equivalent of string | null | undefined. In a type system where types are nullable by default and you can mark a type as non-nullable, how would you express !string? As string minus null minus undefined?

In the end I don't really understand the concern of people here: null is not a string, in the same way that 5 is not a string; neither value should be usable where a string is expected, and letting var myString: string = null slip through is as error-prone as var myString: string = 5.
Having null or undefined assignable to any type is perhaps a concept that developers are familiar with, but it is still a bad one.

I don't think I was entirely correct in my previous post: I'll blame it on the hour.

I've just looked through some of our code to see how things would work and it would certainly help to mark certain fields as nullable, for example:

interface Foo {
        name: string;
        address: string|null; /* Nullable */
}

var foo:Foo = new FooClass();
foo.name.toString(); // Okay
foo.address.toString(); // Error: do not use without null check

But what I do object to is the following:

foo.name = undefined; // Error: non-nullable

I feel this will interfere with the natural way of working with JavaScript.

The exact same thing could apply with number:

interface Foo {
        name: string;
        address: string|number; 
}
var foo:Foo = new FooClass();
foo.name.toString(); // Okay
foo.address.slice() // error

foo.name  = 5 // error

And it's still valid in JavaScript

Realistically, how many times do you willingly assign null to a property of an object?

I think that most things would be marked as nullable, but you'd be relying on type guards to establish that a field is non-nullable at a given point (see the sketch below).
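For example (a sketch, assuming some object o with a nullable field f):

if (o.f != null) {
    o.f.length;    // inside the guard, f is narrowed to its non-null type
}
o.f.length;        // outside the guard, still an error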

@fdecampredon
Quite a lot actually.

@Griffork,

I think that most things would be marked as null

That was my initial thought. But after going through some sections of our code I found comments such as the following:

interface MyType {

     name: string;

     /** The date the entry was updated from Wikipedia or undefined for user-submitted content. */
     wikiDate: Date; /* Nullable */
}

The idea that a field is nullable is often used to provide information. And TypeScript will catch errors if it requires a type guard when accessing wikiDate.
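For example (a sketch; load() is a hypothetical function returning a MyType):

var entry: MyType = load();
entry.wikiDate.getTime();       // error: wikiDate may be undefined
if (entry.wikiDate) {
    entry.wikiDate.getTime();   // okay inside the type guard
}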

@fdecampredon

foo.name = 5 // error
And it's still valid in JavaScript

True, but that is an error because TypeScript knows with 100% certainty that it was not intentional.

whereas

foo.name = undefined; // Do not send name to server

is perfectly intentional.

I think the implementation that would most closely fit our requirements is to not use union types, but go with the original suggestion:

 wikiDate: ?Date;

I agree with @NoelAbrahams

foo.name = 5 // error
And it's still valid in JavaScript
True, but that is an error because TypeScript knows with 100% certainty that it was not intentional.

The compiler just knows that you marked name as string and not string | number. If you want a nullable value you would just mark it as ?string or string | null (which is pretty much equivalent).

I think the implementation that would most closely fit our requirements is to not use union types, but go with the original suggestion:

wikiDate: ?Date;

So we agree: types are non-null by default and you mark nullable ones with ? :) .
Note that it would be a union type, since ?Date would be the equivalent of Date | null | undefined :)

Oh sorry, I was trying to agree to nullable-by-default, with non-null as the special typing (the symbols are confusing).

@fdecampredon, actually what it means is when a field or variable is marked as nullable then a type guard is required for access:

var wikiDate: ?Date;

wikiDate.toString(); // error
wikiDate && wikiDate.toString(); // okay

This is not a breaking change, because we should still be able to do this:

 var name: string;   // okay
 name.toString();  // if you think that's fine then by all means

Perhaps you believe that we can't have this without introducing null into union types?

Your first example is absolutely right; when you do:

wikiDate && wikiDate.toString(); // okay

you use a type guard and the compiler should not warn about anything.

However your second example is not good

var name: string;   // okay
name.toString();  // if you think that's fine then by all means

the compiler should have an error here. A simple algorithm could just error on the first line (uninitialized variable not marked as nullable); a more complex one could try to detect assignment before first usage:

var name: string;   // okay
name.toString();  // error because not initialized
var name: string;
if (something) {
  name = "Hello World";
} else {
  name = "Foo bar";
}
name.toString();  // no error since name will always be initialized.

I don't know exactly where to put the barrier, but it would surely need some subtle tuning so as not to get in the way of the developer.

It's a breaking change and cannot be introduced before 2.0, except perhaps with the 'use nonnull' directive proposed by @metaweta

Why not have:

var string1: string; //this works like typescript does currently, doesn't need type-guarding before use, null and undefined can be assigned to it.
string1.length; //works
var string2: !string; //this doesn't work because the string must be assigned to a non-null and non-undefined value, doesn't need type-guarding before use.
var string3: ?string; //this must be type guarded to non-null, non-undefined before use.
var string4: !string|null = null; //This may be null, but should never be undefined (and must be type-guarded before use).
var string5: !string|undefined; //this may never be null, but can be undefined (and must be type-guarded before use).

And have a compiler flag (that only works if -noimplicitany is on) called -noinferrednulls, which disables the bare syntax for types (like string and number): you have to supply a ? or ! with them (null, undefined and any types being exceptions).

In this manner, non-nullables are an opt-in, and you can use the compiler flag to force a project to be explicitly nulled.
The compiler flag errors at the assignment of types, not after (like the previous proposal); see the sketch below.
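For example, with -noimplicitany and -noinferrednulls on (a sketch of the intent):

var a: string;            // error: nullability must be stated explicitly
var b: !string = "hi";    // okay: may never be null or undefined
var c: ?string;           // okay: must be type-guarded before use
var d: any;               // okay: null, undefined and any are exempt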

Thoughts?

Edit: I wrote this because it forces the idea of using non-null to be explicit in every action. Anyone who reads the code who comes from any other TS project will know exactly what's happening. Also the compiler flag becomes very obvious if it's on (as blah: string is an error, but blah:!string isn't, similar to the way -noimplicitany works).

Edit2:
DefinitelyTyped could then be upgraded to support -noinferrednulls, and it won't change the use of the libraries for people who choose not to opt in to the ? and ! feature.

I don't care whether non-null and non-undefined are opt-in, opt-out, with
type modifiers (!?), a directive, or a compiler flag; I'll do whatever it
takes to get them _so long as they are possible to express_, which is not
currently the case.


@Griffork that would be an option, but a bad one in my opinion, and I'll explain why :

  • It will cause a lot more work in definition files, since now we will have to check and annotate every type to get the right definition.
  • We will end up writing !string (and sometimes ?string) everywhere in the code, which would make the code a lot less readable.
  • !string in a system where types are nullable is a strange concept: the only way you can really describe it is string minus null minus undefined. On the contrary, ?string is pretty simple to describe in a type system where types are null by default: string | null | undefined.
  • I foresee a lot of headaches (and perf loss) in finding a type-check algorithm where the compiler understands that string | null requires a type guard but string doesn't; you basically introduce a concept where some union types should be treated differently than others.
  • And finally, the worst part: we completely lose type inference. Given var myString = "hello", what should myString be: string, ?string or !string? Honestly, a big headache in perspective here.

If we don't have non-null types as the default, the best proposal I have seen here is the 'use non-null' directive proposed by @metaweta.
Sure, it needs to be specified nicely, but at least with just a 'use non-null' string in all our files we can get simple and predictable behavior.

@fdecampredon

  1. It may be a lot more work in definition files, but _you'd have to do that work anyway_ (to ensure the types are correct) and this time the compiler would remind you of what you haven't edited yet (if using -noimplicitnull).
  2. I'm open to other suggestions for annotation. I honestly believe that the current type system has its place, and should not be _replaced_. I don't think a breaking change is worth it. Instead I think we should find a better way of describing what you're after. (I really, really dislike the idea of representing any of this with symbols; they're not intuitive.)
  3. What's hard to describe about it? I've seen discussions elsewhere in typescript where this request (for certain types) has been proposed (without a marker). Did a quick search and I couldn't find the issue, I'll hunt more later.

  4. If you're referring to what I had written as !string|null, that would work in the current system if null was treated _like_ {} (but was not assignable to it).
    If you're talking about string|null, which I didn't have in my list, then I think null should be ignored in this case. Null and undefined only make sense in unions where every non-null and non-undefined member is preceded with a ! and the any isn't an any (this could be a compiler warning/error).
  5. Good question, and one that only arises if you're using the -noimplicitnull option. I think the safest choice would be whichever option is most likely to cause an early error (probably nullable), but I get the feeling there's a better idea that I'm not thinking of. I wonder if someone else has a suggestion on how this should be approached?

Edit: Added to point 2.
Edit: Fixed typo in point 4.

It may be a lot more work in definition files, but you'd have to do that work anyway (to ensure the types are correct) and this time the compiler would remind you of what you haven't edited yet (if using -noimplicitnull).

Yes and no. Check the libraries that you use and see how many actually return null or undefined.
It's a pretty rare case: we could only find very few occurrences in this issue for the standard lib, and Promise libraries, for example, never do it.
My point is that in a type system where types are non-nullable by default, most of the existing definition files are already valid.

I'm open to other suggestions for annotation. I honestly believe that the current type system has it's place, and should not be replaced. I don't think a breaking change is worth it. Instead I think we should find a better way of describing what you're after. (I really, really dislike the idea of representing any of this with symbols, they're not intuitive.)

I don't think there is such a way, but I hope I am wrong, because it would have inestimable value in my opinion.
As for the breaking-change part: why are you so against the 'use non-null' directive?
Developers who want a nullable type system would not be impacted at all (unless they already, strangely, add 'use non-null' at the top of their files, but honestly that would be a bit ... weird).
And developers who want a non-null type system could just use the new directive.

What's hard to describe about it? I've seen discussions elsewhere in typescript where this request (for certain types) has been proposed (without a marker). Did a quick search and I couldn't find the issue, I'll hunt more later.

I just find the concept a bit 'weird' and not very clear. I generally use composition as the main tool for programming, not _decomposition_, but why not.

If you're referring to what I had written as !string|null, that would work in the current system if null was treated like {} (but was not assignable to it). If you're talking about string|null which I didn't have in my list, then I think null should be ignored in this case. Null and undefined only make sense in unions where every non-null and non-undefined is preceded with a ! and any isn't an any (this could be a compiler warning/error).

I'm referring to the fact that when you decompose the 3 types:

  • !string: string - null - undefined
  • string: string | null | undefined
  • ?string: string | null | undefined

The last two are basically identical, but the compiler should know that for string it should not force a type-guard check while for ?string it should. That info will have to be propagated everywhere, the algorithm will be a lot more complex than it is now, and I'm pretty sure I could find strange cases with type inference.

Good question, and one that only arises if you're using the -noimplicitnull option, I think the safest option would be to assign it to which ever option is the most likely to cause an early error (probably nullable), but I get the feeling there's a better idea that I'm not thinking of. I wonder if someone else has a suggestion on how this should be approached?

Won't do: it would basically introduce the same problem that @RyanCavanaugh commented about when I thought about introducing a flag to switch from null-by-default to non-null-by-default.
In that case the former proposal was a lot simpler.

Again, why are you against the 'use non-null' directive? The more I think about it, the more it seems to me the ideal solution.

@fdecampredon
Because the "use non-null" as has been currently proposed changes the way the language is _used_ not the way the language is _written_. Meaning that a function from one location when moved to another location may work differently when it's _used_. You will get compile-time errors that are potentially 1-3 files away because something is typed incorrectly.

The difference between:
string
and
?string
is that the ? or ! symbol asks for strict type checking on this variable (much like your "use nonnull" directive would, but on a per-variable basis). Explain it that way and I think you won't have much trouble with people understanding it.

The difference is of course:
File 1:

//...187 lines of code down...
function myfoo(checker: boolean): string {
    if(checker){
        return null;
    }
    else {
        return "hello";
    }
}

File 2:

"use nonnull"
//...2,748 lines of code down...
function myfoo(checker: boolean): string {
    if(checker){
        return null; //Error!
    }
    else {
        return "hello";
    }
}

Developers now have to keep a mental map of which files are nonnull and which files aren't. I honestly believe that this is a bad idea (even if most of _your_ code is 'use nonnull').

Edit: Also when you start typing a function and you get the little window that tells you what the function definition is, how do you know if string in that definition is nullable or nonnullable?

@Griffork

  • Again, that is already the case with 'use strict'.
  • At least the idea is simple, and introduces a simple concept into the type system.
  • A developer moving a function to another file seems like a pretty rare edge case to me, and since the error he will be prompted with will be about a null check, he will quickly be able to understand what's happening...

Again It's not perfect, but I don't see a better alternative.

Just to be clear, your only problems with what I proposed (after counter-arguments) that I can see are:

  • We don't know what the behavior for var mystring = "string" would be
  • You don't want to have to type symbols everywhere.

And my concerns with the directive are:

  • Non-nulls would not be explicit (but can occur in the same project as nullables). edit: fixed wording to make more sense.
  • It will be harder for developers to keep track of whether what they're writing is nullable or isn't nullable.
  • The function definitions that you see when you invoke a function (edit: the pop-up supplied by Visual Studio when you start typing a function) may or may not be nullable and you wouldn't be able to tell.
  • Once a function is wrapped through 2 or 3 layers, you don't know if its definition is still correct (as you can't use type inference through files that don't have "use nonnull").

Honestly the implementation of "use strict" is not something that should be aspired to. It was designed for JavaScript (not Typescript), and in JavaScript there are precious few alternatives. We're using Typescript, so we have the option of doing things better.
And I can't see why the directive is better than forcing developers to be explicit about their intentions (after all, that's the reason Typescript was created, wasn't it?).

Edit: Clarified a point.
Edit: Put string in quotes

Honestly, your summary of my concerns is a bit thin and does not reflect what I wrote. Your proposal adds complexity to the type system, makes the type-check algorithm a headache, is pretty hard to specify given all the edge cases it would create (especially for type inference), and makes code ultra-verbose for nothing. Basically, not the right tool for the job.

I want all non-nulls to be explicit (since they can occur in the same project as nullables).

I want the fact that string is actually string | null | undefined (without forcing you to check that) to be explicit.

It will be harder of developers to keep track of whether what they're writing is nullable or isn't nullable.

I doubt there would be a single file in the project without 'use non-null' if the project uses non-nullity, and scrolling to the top of your file is not so hard (at least when you write files with fewer than 500 lines of code, which is the majority of cases...)

The function definitions that you see when you invoke a function may or may not be nullable and you wouldn't be able to tell.

Yes you will: if it comes from a module that has the 'use non-null' directive, things will be typed accordingly; if you don't know, just consider everything nullable...

Once a function is wrapped through 2 or 3 layers, you don't know if it's definition is still correct (as you can't use type-inference through files that don't have "use nonnull").

I don't understand your point here.

Honestly the implementation of "use strict" is not something that should be aspired to. It was designed for JavaScript (not Typescript), and in JavaScript there are precious few alternatives. We're using Typescript, so we have the option of doing things better.
And I can't see why the directive is better than forcing developers to be explicit about their intentions (after all, that's the reason Typescript was created, wasn't it?).

TypeScript is a superset of javascript; it has its roots in JavaScript, and the goal is to allow you to write javascript in a safer way, so reusing a concept from JavaScript seems not unreasonable.

And I can't see why the directive is better than forcing developers to be explicit about their intentions

Because you just won't be able to achieve the same result: for all the reasons I cited, having non-nullable types in a nullable type system is just a bad idea.

From my point of view there are 3 viable solutions here:

  • A breaking change for 2.0 where types become non-nullable
  • A compiler flag that switches to non-nullable types by default, like I proposed a few dozen comments up, but @RyanCavanaugh had a good point about that (even if I honestly think it's worth it)
  • The 'use non-null' directive

The flag would totally be worth it to me, by the way.

function fn(x: string): number;
function fn(x: number|null): string;

function foo() {
    return fn(null);
}

var x = foo(); // x: number or x: string?

If functions are the only concern, an exception could be made in cases where an overload contains an explicit null argument - it would always take precedence over implicit null (in order to be harmonious with --noImplicitNull). This would also be the interpretation that would make sense to the user (i.e. "what I say explicitly should override what is said implicitly"). Though I do wonder if there are other similar problems with the flag that can't be solved this way. And of course this adds some hackish complexity to both the spec and the implementation :|

  1. string|null|undefined was explicit in my proposal using the flag, that's why I left it out.
  2. Then why have it per file? I'm making this suggestion because it _doesn't break backwards compatibility_, which is important to me, and probably a lot of other devs. And it can be forced to be project-wide.
  3. I use a lot of files I don't create; how would I know that that particular file has "use nonnull"? If I have 100 files made by other people, I have to memorize which of those 100 files is nonnull? (or, _every time_ I use a variable/function from another file, I have to open that file and check it?)
  4. Let's see if I can make this clearer: example at the end of the post.
  5. Where do you stop? Where do you call it bad? One string or keyword at the beginning of a file should not change the way a file behaves. "use strict" was added to javascript because

    1. No large super-set of Javascript (e.g. Typescript) existed at the time that could do what it wanted.

    2. It was an attempt to speed up the processing of JavaScript (which in my eyes is the only reason it's excusable).

      "use strict" was adopted _not because it was the right thing to do_ but because it was the only way that the browsers could cater to the demands of developers. I would hate to see typescript add 1 (then 2, then 3 then 4) other directives that _fundamentally change the way the language works_ as strings that are declared in some arbitrary scope, and affect some other arbitrary scopes. It's really bad language design. I would be happy if "use strict" didn't exist in Typescript, and instead was a compiler flag (and Typescript emitted it into every file/scope that needed it).

  6. Javascript isn't a "non-nullable type system", Typescript isn't a "non-nullable type system". Introducing the ability to declare non-nullable types does not mean that the whole system is "non-nullable". It isn't the point of Typescript.

File 1:

"use notnull"
export function foo(): string {
    return "mygeneratedstring";
}

File 2:

export function foo(): string {
    return file1.foo();
}

File 3:

"use notnull"
file2.foo(); //???

You actually have the ability to lose contextual information while still using the same syntax! That's not something I look forward to having in any language I use.


We seem to be arguing in circles. If I am not making sense to you then I am sorry, but you seem to be repeatedly raising the same issues, and I am repeatedly replying to them.

@spion
In that example string isn't an implicit null (I believe it assumes that the --noImplicitNull flag is on)

string|null|undefined was explicit in my proposal using the flag, that's why I left it out

What I mean is that in the current system var myString: string is implicitly var myString: (string | null | undefined), and on top of that the compiler does not force you to use a type guard, unlike with all the other union types.

Then why have it per file? I'm making this suggestion because it doesn't break backwards compatibility, which is important to me, and probably a lot of other devs. And it can be forced to be project-wide.

The directive does not break backward compatibility (unless you used the string literal 'use not-null' in the position of a directive for some pretty strange reason, which I bet nobody does). Also, at some point TypeScript will have to break backward compatibility; that's why semver defines major versions, but that's another story.

I use a lot of files I don't create; how would I know that that particular file has "use nonnull"? If I have 100 files made by other people, I have to memorize which of those 100 files is nonnull? (or, every time I use a variable/function from another file, I have to open that file and check it?)

If you are in a project with guidelines, etc., like every normally organized project, you know... in the worst case you just have to scroll a bit... doesn't seem so hard...

Where do you stop? Where do you call it bad? One string or keyword at the beginning of a file should not change the way a file behaves. "use strict" was added to javascript because
No large super-set of Javascript (e.g. Typescript) existed at the time that could do what it wanted.
It was an attempt to speed up the processing of JavaScript (which in my eyes is the only reason it's excusable). "use strict" was adopted not because it was the right thing to do but because it was the only way that the browsers could cater to the demands of developers. I would hate to see typescript add 1 (then 2, then 3 then 4) other directives that fundamentally change the way the language works as strings that are declared in some arbitrary scope, and affect some other arbitrary scopes. It's really bad language design. I would be happy if "use strict" didn't exist in Typescript, and instead was a compiler flag (and Typescript emitted it into every file/scope that needed it).

'use strict' was introduced to change the language behavior while maintaining backward compatibility, the exact same problem we face now, and it will more or less disappear with es6 modules (and so 'use non-null' could disappear with the next major version of typescript).

Javascript isn't a "non-nullable type system", Typescript isn't a "non-nullable type system". Introducing the ability to declare non-nullable types does not meant that the whole system is "non-nullable". It isn't the point of Typescript.

That sentence does not make any sense to me. JavaScript does not have a static type system, so like I said, the fact that you can assign null to a variable that previously held a string can easily be compared to the fact that you can assign 5 to a variable that previously held a string.
The only difference is that TypeScript, a type checker created for JavaScript, opinionatedly considers null assignable to everything, but not 5.

For your example: yes, we lose some information between files. It's an acceptable tradeoff for me, but I can understand your concern.

We seem to be arguing in circles. If I am not making sense to you then I am sorry, but you seem to be repeatedly raising the same issues, and I am repeatedly replying to them.

Yes I agree, I doubt there will be any consensus on that one. I honestly think that I will just end up forking the compiler and adding non-null types for myself, hoping that one day they will end up in the main compiler, or that Flow matures enough to be usable.

@Griffork by the way, I think @spion's idea does work: the idea is to say that, with string being implicitly nullable without the flag, the explicit number | null would always take precedence over the first overload, so in both cases fn(null) will be string.

@RyanCavanaugh do you think it could work?

@fdecampredon

  1. "use strict" was introduced to optimize javascript. It had to be consumed by browsers, therefore it had to be in-code. And it had to be backwards compatible, so it had to be a non-functional statement.
    Non-nullable types do not have to be consumed by browsers, so they do not need an in-code directive.
  2. Sorry, I was referring to something I had forgotten to post, here it is:

According to Facebook: "In JavaScript, null implicitly converts to all the primitive types; it is also a valid inhabitant of any object type."
The reasoning for me to want non-nullable to be explicit (in every action that a developer takes) is that by using non-nullable the developer is acknowledging that they are diverging from the way javascript works, and that a type being non-nullable is not guaranteed (as there are many things that can cause this to go wrong).
A property that is non-nullable is a property that can never be deleted.

I've been using Typescript for quite a long time. It was hard for me to initially convince my boss to change from Javascript to Typescript, and it was pretty hard for me to convince him to let us upgrade whenever there was a breaking change (even when there isn't one, it's not easy). When ES:Harmony is supported by browsers, we will probably want to be using it. If a breaking change happens in Typescript between now and when Typescript supports ES:Harmony (particularly one as prevalent as non-nullable) I doubt I'd win that argument. Not to mention the amount of time and money it would cost us for me to retrain all of our devs (I am our "Typescript person"; it's my job to train our Typescript developers).
Thus, I am really against introducing breaking changes.
I would not mind being able to use non-nullables myself, if there were a way that I could introduce them into future code without breaking old code (this is the reason for my proposal). Ideally all of our code would eventually be converted, but it would cost us significantly less time and money in training people and using 3rd-party libraries.

I like the idea of using symbols (as per my suggestion) instead of a per-file directive, because they have the ability to propagate without contextual loss.

@Griffork Ok, like I said, I understand your concern; I hope you understand mine.
Now, I don't have the same opinion: technology changes, nothing is stable (the whole drama around angular 2 has proven that), and I prefer to break things and have better tools rather than stick with what I have; I think that in the long term it saves me time and money.
Like we both concluded earlier, I doubt we will reach a consensus here: I prefer the directive, you prefer your proposal.
Anyway, like I said, for my part I'll just try to fork the compiler and add non-null types, since I think it's not a big change in the compiler.
Thanks for the discussion, it was interesting

@fdecampredon

Thanks for the discussion it was

Presuming you meant enlightening (interesting works too :D), I also enjoyed the debate :).
Good luck with your fork, and I hope we get a TS person in here to at least dismiss some of our suggestions (so we know what they definitely don't want to implement).

So to elaborate further on my proposed solution to the problem @RyanCavanaugh described, after adding the null type and the --noImplicitNull flag, the typechecker when run without the flag will have a modified behavior:

Whenever a type overload contains |null (like the given example):

function fn(x: string): number;
function fn(x: number|null): string;

the typechecker will take every overload and make new variants with (implicit) null at that argument position, added at the end of the list:

function fn(x: string): number;
function fn(x: number|null): string;
// implicit signature added at the end, caused by the first overload
function fn(x: null): number;

This results in the same interpretation as the strict --noImplicitNull version: when passed a null, the first matching overload is the explicit number|null one. Effectively it makes explicit nulls stronger than implicit ones.

(Yes, I know that the number of implicit definitions grows exponentially with the number of arguments that have |null overloads. I'm hoping there is a faster implementation that "allows" nulls in a second pass if no other types match, but I don't think it's going to be a problem either way, as I expect this situation to be rare.)

Formulated like this I think the change will hopefully be fully backward-compatible and opt-in.

Here are some more ideas about the rest of the behavior:

Assigning null or leaving values uninitialized will also be allowed for values of any type when the flag is not passed.

When the flag is turned on, leaving values uninitialized (unassigned) or null would be allowed until the time they're passed to a function or operator that expects them not to be null. Not sure about class members: they would either

  • need to be initialized by the end of the constructor (see the sketch after this list), or
  • initialization could be tracked by checking whether member methods that fully initialize them were called.
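A sketch of the first option (hypothetical classes):

class Point {
    x: number;
    constructor(x: number) {
        this.x = x;    // okay: x is definitely assigned by the end of the constructor
    }
}

class Broken {
    label: string;
    constructor() { }  // error under the flag: label may be left uninitialized
}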

Explicit null unions will behave similarly to all other unions in both modes. They would require a guard to narrow the union type.

I just read _all_ the comments above and that's a lot. It's disappointing that after so many comments and 5 months, we are still debating the same points: syntax and flags...

I think many people have made strong arguments for the value of static null checking by the compiler in large codebases. I will say nothing more (but +1).

So far the main debate has been about the syntax and the compatibility issue with tons of existing TS code and TS library definitions. I can see why introducing a null type and the Flow syntax ?string seems the cleanest and most natural. But this changes the meaning of string and hence is a _huge_ breaking change for _all_ existing code.

The solutions proposed above for this breaking change are to introduce compiler flags (@RyanCavanaugh has made a very good point about why this is a terrible idea) or a per-file directive. This seems like a very hackish stop-gap measure. Moreover, I would hate reading code in the middle of a diff review or a huge file and not being able to immediately know _for sure_ the meaning of the code I'm looking at. 'use strict'; was cited as a precedent, but just because TS inherited JS's past mistakes doesn't mean we should repeat them.

That's why I think the "not null" annotation (string!, !string, ...) is the only way to go and I will try to introduce a new argument in the discussion: maybe it's not so bad.

Nobody wants to have bangs (!) all over the code. But we probably don't have to. TS infers a lot about typing.

1. Local variables
I don't expect to annotate my local variables with bangs. I don't care if I assign null to a local variable, as long as I use it in a safe way, which TS is perfectly capable of asserting.

var x: string;  // Nullable
x.toString();  // Error: x can be undefined or null
x = "jods";
x.toUpperCase();  // Fine.
notNull(x);  // Fine

Quite often I use an initializer instead of a type.

var x = "jods";  // x: string (nullable)
notNull(x);  // Fine
x = null;  // Fine
notNull(x);  // Error

2. Function return values
Very often I do not specify the return type. Just as TS infers the type, it can infer whether it is nullable or not.

function f() /* : string! */ {
  return "jods";
}

EDIT: bad idea because of inheritance and function variables, see comment by @Griffork below
Even if I give a return type, TS is perfectly able to add the "not null" annotation for me.

If you want to override the inferred nullability (e.g. because a derived class may return null although your method doesn't), you specify the type explicitly:

function f() : string {  // f is nullable although this implementation never returns null.
  return "abc";
}

3. Function parameters
Just like you have to explicitly specify a type, this is where you _have to_ indicate that you don't accept null parameters:

function (x: string!) { return x.toUpperCase(); } // OK
function (x: string) { return x.toUpperCase(); } // Error

To avoid compilation errors in old code bases, we need a new flag "Null dereference is an error", just like we have "No implicit any". _This flag doesn't change the compiler analysis, only what is reported as an error!_
Is that a lot of bangs to add? Hard to say without doing statistics on real, large codebases. Don't forget that optional parameters are commonplace in JS and they are all nullable, so you would need those annotations if 'not null' were the default.
Those bangs are also nice reminders that null is not accepted for callers, consistent with what they know about string today.

4. Class fields
This is similar to 3. If you need to read a field without being able to infer its content, you need to mark it non-nullable at declaration.

class C {  x: string!; }

function(c: C!) // : string! is inferred
{ return c.x; } // OK, but annotations are required

We could imagine a shorthand for initializer, maybe:

class C {
  x! = "jods"; // Note the bang: x is inferred as !string rather than just string.
}

A similar shorthand is probably desirable for nullable properties if you say not-null is the default with an initializer.
Again it's hard to say if most fields in existing code are nullable or not, but this annotation is very much in line with expectations that devs have today.

5. Declarations, .d.ts
The huge benefit is that all existing libraries work as-is. They accept null and non-null values as inputs and they are assumed to return a possibly-null value.
At first you will need to explicitly cast some return values to "not null", but the situation will improve as the definitions are slowly updated to indicate when they never return null. Annotating the parameters is safer but not required to compile successfully.

I think stating the "not-null" status of a value where the compiler can't infer it can be useful (e.g. when manipulating untyped code, or when the dev can make some assertions regarding the global state of the program), and a shorthand might be nice:

var a: { s: string } = something(); // Notice s is nullable.
notNull(a.s); // Error
notNull(<!>a.s);  // OK, this is a shorthand "not-null" cast, because the dev knows about something().

I think the ability to hint the compiler -- in a simple way -- that something is not null is desirable. This is also true for not-null by default, although it's harder to come with a simple, logical syntax.

Sorry, this was a very long post. I tried to demonstrate the idea that even with "null by default", we wouldn't have to annotate all our code: TS can infer the correctness of most code without our help. The two exceptions are fields and parameters. I think this is not excessively noisy and makes for good documentation. A strong point in favor of this approach is that it is backward compatible.

I agree with most of what you said @jods4 but I have some questions and concerns:

2) If the compiler can automatically convert nullable to non-nullable on return types, how can you override this (e.g. for inheritance or functions that get replaced)?

3 & 5) If the compiler can convert nullable to non-nullable, then a function signature in a code file can now behave differently from the same function signature in a d.ts file.

6) How does the bang work/look when using union types?

2) Good catch. I don't think it would work well with inheritance (or 'delegates'), indeed.
Where I work, our experience with TS is that we almost never explicitly type return values from functions. TS figures them out, and I think TS can figure out the null/not-null part too.

The only cases where we explicitly specify them are when we want to return an interface or a base class, e.g. for extensibility (including inheritance). Nullability seems to fit this description quite well.

Let's change my proposal above: an explicitly typed return value is _not_ inferred to be non-nullable unless it is declared so. This works well for the cases you suggest. It provides a way to override the compiler inferring "not null" on a contract that might be null in child classes.

3 & 5) With the change above, that's not the case anymore, right? Function signatures in code and in definitions now behave exactly the same way.

6) Interesting question!
This one is not great indeed, especially if you add generics to the mix. The only way out that I can think of is that all parts must be non-null. You probably need a null type literal anyway because of generics; see my last example.

var x: number | string!; // compiler error
var x: number | string; // x can be null
var x: number! | string!; // x cannot be null
function f<T>() : number | T; // f can be null
function f<T>() : number! | T; // f is nullable if T is nullable
function f<T, G>(): T | G; // f is nullable if T or G is nullable
function f<T>(): T | null; // f is nullable even if T is not nullable.
function f<T>(): T!; // Whatever T is, f never returns null.

// Generics constraint option 1
function f<T!>(x: T!, y: T): T!; // T: not nullable type, x: not-null, y: null, f: not-null
function f<T!>(x: T!, y: T): T;  // T: not nullable type, x: not-null, y: null, f: null

// Generics constraint option 2
function f<T!>(x: T, y: T | null): T; // same as option 1.1
function f<T!>(x: T, y: T | null): T | null;  // same as option 1.2

For the sake of discussion, note that if you went for non-null types by default, you would need to invent new syntax as well: how to express that a generic type must not allow nullable values?
And conversely, how would you specify that even if the generic type T contains null, a function returns T but never null?

Type literals are ok-ish but not very satisfying either: var x: { name: string }!, especially if they were to span more than one line :(

More examples:
Array of not-null numbers number![]
Not null array of numbers: number[]!
Nothing can be null: number![]!
Generics: Array<number!>[]

2, 3 & 5) I agree with the change to the proposal, that seems like it would work.
6)

function f<T>() : number | T; // f can be null
function f<T>() : number! | T; // f is nullable if T is nullable

I assume that here you mean f can return null, not that f can be assigned to null.

For unions, what do you think of the following syntax being another available option? (It looks a bit silly, but might be more succinct/easier to follow in some cases.)

var x = ! & number | string;

To signify that neither are nullable.


For type literals I would say that you can put the bang at the beginning of the literal expression as well as at the end; var x: !{name: string} will be easier to work with, in my opinion, than having it at the end.

@jods4 did you read about my "explicit null has priority over implicit null" proposed fix? That takes care of the varying semantics issue, I think.

@Griffork
Yes of course, "is nullable" was bad wording for "can return null".

Which makes me think: function f() {} is actually a declaration of f, which can be reassigned later on... do we want to express that f will never be set to null? Like function f!() {}? That seems to be going too far in my opinion. I think the typechecker should assume this is always the case. There are other uncommon edge cases that it can't handle anyway.

Regarding the new option, I agree that it looks a bit silly and introduces one additional symbol into type declarations: &, with a single purpose... That doesn't look like a good idea to me. And in several languages & (AND) binds with higher priority than | (OR), which is not what happens here.

Putting the bang in front of types is a possible alternative, I guess we should try to rewrite all examples to see what might look better. I think this field shortcut I suggested is going to be a problem:

class C {
  name! = "jods";  // Field inferred string, but marked not nullable.
  // Maybe we could do that instead, which works with "!T" convention:
  name : ! = "jods";
  // That kind of make sense with the proposed <!> cast.
}

@spion
Yes I saw it.

First, I'm unsure that it really solves the issue at hand. So with your new definition, which overload binds to fn(null)? The 'generated' one that returns number? Isn't that inconsistent with the fact that the second overload also accepts null but returns string? Or is it a compiler error because you can't perform overload resolution?

Second, I'm pretty sure we can come up with lots of other issues. I think changing the global meaning of source code at the flip of a switch just can't work. Not only in .d.ts files: people love to reuse code (library reuse or even copy-pasting). Your suggestion means the semantic meaning of my source code depends on a compiler flag, and that seems a very flawed proposition to me.

But if you can prove that code will compile and run correctly in all cases I'm all for it!

@jods4

In both cases the 2nd overload will be used

function fn(x: string): number;
function fn(x: number|null): string; 

because the explicit null in the second overload will take precedence over the implicit one in the first, no matter which flag is used. Therefore the meaning doesn't change based on the flag.
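Concretely, given those two overloads:

fn(null);    // second overload: returns string (the explicit null wins)
fn("hi");    // first overload: returns number
fn(5);       // second overload: returns string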

Since the null type doesn't exist right now, they would be introduced together and the change will be fully backward compatible. Old type definitions will continue to work normally.

@spion
So what was the matter with the implicit function fn(x: null): number;? If the 2nd overload always takes precedence, no matter which flag is used, then this whole "fix" has no effect at all?

Other question: today all lib definitions have "null allowed" behavior. So until they are _all_ updated, I have to use the "implicitly null" flag. Now with this flag on, how can I declare a function that takes a non-null parameter in my project?

// null by default flag turned on because of 3rd party libs.
function (x: string)   // <- how do I declare this not null?
{ return x.toUpperCase(); }

The fix certainly has an effect. It makes the behavior with and without the flag consistent, which fixes the semantic issue mentioned above.

Secondly, you won't have to use a "null by default" flag for 3rd-party libs. The most important thing about this feature is that you won't have to do _anything at all_ to get the benefits in the vast majority of cases.

Let's say that there is a library that calculates the word count for a string. Its declaration is

declare function wordCount(s: string): number;

Let's say your code uses this function this way:

function sumWordcounts(s1:string, s2:string) {
  return wordCount(s1) + wordCount(s2);
}

This program passes under both compilers: even if implicit nulls are disabled. The code is totally backward compatible.

Why? Because the compiler as it is today already has faith that values are not null everywhere you try to use them (even though they can theoretically be null).

The new compiler would also assume that the values are not null when you try to use them. It has no reason to believe otherwise, since you haven't specified that the value may be null, so that part of the behavior remains the same. The change in behavior only takes effect when assigning null (or leaving variables uninitialized) and then trying to pass those values to a function like the above; it doesn't require any changes in other parts of the code that use values normally.
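For example, under the flag (a sketch):

var s: string;             // declared but never assigned
sumWordcounts(s, 'a');     // error: s may be uninitialized
var t = 'hello';
sumWordcounts(t, 'a');     // okay: unchanged behavior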

Now let's look at the implementation of wordCount:

function wordCount(s) {
  if (s == '') return null;
  return s.split(' ').length
}

Oops, the declared type doesn't tell the whole story. It's possible for that function to return a null value.

The problem is precisely that. It's impossible to tell the whole story in the current compiler, even if we wanted to. Sure, we say that the return value is a number, which implicitly states it can be null: but the compiler never warns about that when we try to access that potentially-null value incorrectly. It always happily thinks it's a valid number.

After the noImplicitNull change, we will get the exact same result here if we use the same type definitions. The code will compile without complaints. The compiler will still happily think that wordCount always returns a number. The program can still fail if we pass empty strings, just like before (the old compiler won't warn us that the returned number may be null, and neither will the new one, as it trusts the type definition). And if we want the same behavior, we can keep it without changing anything in our code. (1)

However, now we will _be able_ to do better: we will be able to write an improved type definition for wordCount:

declare function wordCount(s: string): number | null;

and get a compiler warning whenever we try to use wordCount without checking for a null return value.
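For example:

var n = wordCount('');
n.toFixed(0);                    // error: n is number | null
if (n != null) n.toFixed(0);     // okay after the null check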

The best part is that this improvement is completely optional. We can keep all type declarations as they were and get pretty much the exact same behavior as now. The type declarations won't get worse (2): they can only be improved to be more precise in the cases where we feel it's necessary.


(1): There is already an improvement here even without making the type definition better. With the new compiler, you won't be able to accidentally pass null to sumWordcounts and get a null pointer exception when the wordCount function tries to .split() that null.

sumWordcounts(null, 'a'); // error after the change

(2): Ok, that's a lie. The type declarations will get worse for a small subset of functions: those that take nullable arguments that aren't optional arguments.

Things will still be fine for optional arguments:

declare function f(a: string, b?:string); 

but not for arguments that aren't optional and can be null

declare function f(a: string, b:string); // a can actually be null

I'd argue that those functions are pretty rare, and the fixes needed will be minimal.

@spion

The fix certainly has an effect. It makes the behavior with and without the flag consistent.

In which way, can you give a full example? You also said:

In both cases the 2nd overload will be used

Which, to me, implies that the fix has no effect?

This program passes under both compilers: even if implicit nulls are disabled. The code is totally backward compatible.

Yes, it compiles without error in both cases, but not to the same result: sumWordcounts() will be typed as number! in one case and number? in the other. Changing the semantics of code with a switch is highly _not_ desirable. As demonstrated before, this change could then have global effects, e.g. on overload resolution.

The new compiler would also assume that the values are not null when you try to use them. It has no reason to believe otherwise

No! This is the whole point: I want the compiler to throw errors at me when I use a potentially null value. If it "assumes" not null, like I do when I code, then this feature is useless!

The change in behavior only takes effect when assigning null (or leaving variables uninitialized) and then trying to pass those values to a function like the above.

Not sure I understand, as "function like the above" actually accepts nulls...

Reading the end of your last comment, I have the impression that we won't get real static null analysis until we turn on the switch, which you can't do until _all_ your library definitions are fixed. :(

Generally speaking it's hard to fully understand all the cases and how everything works with just words. Some practical examples would be very helpful.

Here's an example and how I would like the compiler to behave:

Assume a library with functions yes(): string! that never returns null and no(): string? that may return null.

The .d.ts definition is:

declare function yes(): string;
declare function no(): string;

I want to be able to create large projects in TS that benefit from statically checked nulls and use the existing library definitions. Moreover I want to be able to transition _progressively_ to a situation where all the libraries are updated with correct nullness information.

Using the proposed not null modifier ! above, I am able to do that:

function x(s: string!) {  // return type inferred: string (nullable), could be explicit if we want to
  return s.length === 0 ? null : s;  // no error here as s is declared not null
}

x(no());  // error: x called with a possibly null argument
x(<!>yes());  // no error because of the not-null cast. When the .d.ts is updated the cast can be dropped.

How would that work with your idea?

This thread is an overly long discussion (130 comments!), which makes it very hard to follow the ideas, suggestions and problems mentioned by everyone.

I suggest that we create external gists with our proposals, including suggested syntax, what is accepted by TS, what is an error and so on.

@spion since your idea involves compiler flags, for each piece of code you should document TS behavior with the flag set and unset.

Here's the gist for !T not-null marker:
https://gist.github.com/jods4/cb31547f972f8c6bbc8b

It's mostly the same as in my comment above, with the following differences:

  • I noticed that the 'not-null' aspect of function parameters can be safely inferred by the compiler (thanks to @spion for that insight).
  • I included @Griffork comments, notably about inferring return values and I tried !T instead of T!.
  • I added a few sections, e.g. about constructor behavior.

Feel free to comment, fork and create alternative proposals (?T).
Let's try to move this forward.

@jods4

In both cases the 2nd overload will be used

Which, to me, implies that the fix has no effect?

  1. It implies that overload resolution is changed to make language semantics consistent for both flags.

Yes, it compiles without error in both cases, but not to the same result. sumWordcounts() will be typed as number! in one case and number? in the second. Changing the semantics of code with a switch is highly not desirable. As demonstrated before, this change could then have global effects, e.g. on overload resolution.

  2. There is no number! in my proposal. The type will simply be number in both cases. Which means overload resolution continues to work as normal, except with null values, where the new behavior of explicit nulls taking precedence over implicit ones normalizes semantics in a backward-compatible way.

No! This is the whole point: I want the compiler to throw error at me when I use a potentially null value. If it "assumes" not null like I do when I code, then this feature is useless!

  3. The point I was trying to express is that the feature introduces very little backward incompatibility (in terms of type declarations). If you don't use it, you get the exact same behavior as before. If you want to use it to express the possibility that a null value may be returned, well, now you can.

It's as if the compiler used the string type for values of type object and string before, allowing all string methods on all objects and never checking them. Now it would have a separate type for Object, and you can start using that type instead to denote that the string methods aren't always available.

Not sure I understand, as "function like the above" actually accepts nulls...

Let's take a look at this function:

function sumWordcounts(s1:string, s2:string) {
  return wordCount(s1) + wordCount(s2);
}

Under the new compiler flag you wouldn't be able to call it with null values, e.g. sumWordcounts(null, null); and that's the only difference. The function itself will compile because the definition of wordCount says it takes a string and returns a number.

Reading the end of your last comment, I have the impression that we won't get real static null analysis until we turn on the switch, which you can't do until all your library definitions are fixed. :(

A vast majority of code simply doesn't really deal with nulls or undefined values, other than checking whether they're null/undefined and throwing an error to prevent the propagation of that value throughout the codebase to a completely unrelated place, making it hard to debug. There are a few functions here and there using optional arguments, which are recognizable and can probably be modeled accordingly with the new switch. It's not that often that functions return null values like my example (which I used only to make a point about the feature, not as a representative case).

What I'm saying is that the vast majority of type definitions will remain correct, and for the few remaining cases where we need to make the fix, we could choose not to fix them if we deem the null case to be unimportant, or to change the type accordingly, if we care. Which is completely in line with typescript's goal of progressively "turning the dial up" of getting stronger guarantees whenever we need them.


Okay, so the existing type definitions are

declare function yes(): string;
declare function no(): string;

And let's say your code was written before the feature was added as well:

function x(s: string) {
  return s.length === 0 ? null : s;
}

Under the new compiler, the behavior will be exactly the same as the old one

x(no());  // no error
x(yes());  // no error

Unless you try something that the compiler knows may be null, like the result of x()

x(x(no())) // error, x(something) may be null

You look at no() and you can either decide that the cases where it returns null are rare so you're not going to model that, or you can fix the type definition.

Now let's see what happens with your proposal: every single line of code that even touches an external library breaks. Functions that had perfectly valid type definitions like above also break. You have to update every single argument annotation everywhere and add the ! to get the compiler to stop complaining, or add null checks everywhere.

The bottom line is that the type system in TypeScript is currently wrong. It has a dual interpretation of null values:

When we try to assign null to some variable of type T, or pass null as an argument where a type T is required, it acts as if the type is T|null.

When we try to use a value of type T, it acts as if the type is T and there is no possibility of null.

Your proposal suggests treating all normal types T as T|null always; mine suggests treating them as just T. Both change the semantics. Both make the compiler behave "correctly".

My argument is that the "just T" variant is much less painful than it looks and in-tune with the vast majority of code.
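
To make the contrast concrete (a sketch; both interpretations are the proposed readings above, not current compiler behavior):

``` TypeScript
declare function g(): string;

var r = g();
r.toUpperCase(); // "T|null always" reading: error, r may be null
                 // "just T" reading: ok, exactly as today
```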

Edit: I just realized that your proposal may be to keep treating types without "!" or "?" the same way as before. That's backward compatible, yes, but a lot of work to get any benefits (as the vast majority of code simply doesn't deal with null values other than checking/throwing).

@spion

  1. It implies that overload resolution is changed to make language semantics consistent for both flags.

You are just stating the same fact but it's still unclear to me which case it addresses and in which way.
That's why I asked for a concrete example.

  2. There is no number! in my proposal. The type will simply be number in both cases.

That's incorrect. I know the syntax is different, but the underlying concepts are the same. When I say number! I am stressing a non-null number type, which in your proposal would simply be number. And the second case would not be number in your case, but number | null.

A vast majority of code simply doesn't really deal with nulls or undefined values, other than checking whether they're null/undefined and throwing an error to prevent the propagation of that value throughout the codebase to a completely unrelated place, making it hard to debug. There are a few functions here and there using optional arguments, which are recognizable and can probably be modeled accordingly with the new switch. It's not that often that functions return null values like my example (which I used only to make a point about the feature, not as a representative case).

I think descriptions such as this one make _lots_ of assumptions and shortcuts. That's why I think that to make progress we need to move on to more concrete examples with code, syntax and explanations to how the system works. You say:

A vast majority of code simply doesn't really deal with nulls or undefined values, other than checking whether they're null/undefined and throwing an error

Very arguable.

There are a few functions here and there using optional arguments.

Even more arguable. JS libraries are full of such examples.

, which are recognizable and can probably be modeled accordingly with the new switch

Make a more concrete proposition because this is a huge shortcut. I think I can see where this is headed for actual JS code, but how would you "recognize" an optional argument inside a declare function(x: {}); or inside an interface?

It's not that often that functions return null values like my example.

Again very arguable. Many functions return a null or undefined value: find (when the item is not found), getError (when there is no error) and so on... If you want more examples just look at standard browser APIs, you'll find _plenty_.

What I'm saying is that the vast majority of type definitions will remain correct.

As you can tell from my previous comments, I'm not convinced of that.

and for the few remaining cases where we need to make the fix, we could choose not to fix them if we deem the null case to be unimportant, or to change the type accordingly, if we care. Which is completely in line with typescript's goal of progressively "turning the dial up" of getting stronger guarantees whenever we need them.

At this point this statement doesn't seem trivial to me. Can you give concrete examples of how that works? Especially the _progressively_ part.

Now let's see what happens with your proposal: every single line of code that even touches an external library breaks. Functions that had perfectly valid type definitions like above also break. You have to update every single argument annotation everywhere and add the ! to get the compiler to stop complaining, or add null checks everywhere.

This is almost completely incorrect. Please read the gist I wrote carefully. You'll notice that few not-null annotations are actually required. But there is _one_ breaking change, yes.

Assume you take an existing code base that uses library definitions and try to compile it with my proposal without changing any code:

  • You will get a lot of null analysis benefits, even without annotations (pretty much the same way you get that in the Flow compiler).
  • A single thing will break: using the return value of a library function that you know doesn't return null.
declare function f(): string; // existing declaration, but f will never return null.
var x = f();
x.toUpperCase();  // error, possibly null reference.

The long-term fix is to update the library definitions to be more accurate: declare function f(): !string;
The short-term fix is to add a not-null cast: var x = <!>f();.
And to help large projects with lots of dependencies upgrade more easily, I suggest adding a compiler flag similar to "no implicit any": "treat possible null references as warning". This means that you can use the new compiler and wait until libraries have updated definitions. Note: unlike your proposition, this flag doesn't change compiler semantics. It only changes what is reported as an error.

As I said I'm not sure we're making progress in this debate. I suggest you look at my gist and create one with your own ideas, code examples and behavior under both states of your flag. As we are all coders I think it will be clearer. Also there are lots of edge cases to consider. Having done that I have tons of questions to ask for special situations but it's really no use discussing about concepts and ideas without a precise definition.

There are too many discussions about different things going on in this 130+ comments issue.

I suggest we continue _general_ discussion here.

For discussions of concrete proposals that may implement not-null types in TS, I suggest we open new issues, each about a single design proposal. I created #1800 to discuss the !T syntax.

@spion I suggest you create an issue for your design as well.

Many people want not-null types. It's easy in new languages (e.g. Rust) but it's very hard to retrofit into existing languages (people have been asking for not-null references in .NET for a long time -- and I don't think we'll ever get them). Looking at the comments here shows that it's a non-trivial issue.
Flow has convinced me this can be done for TS, let's try to make it happen!

@jods4 I'm sorry to say so, but your proposal has nothing to do with non-null types. It's more related to some sort of control-flow analysis, hardly predictable, and it completely breaks backward compatibility (in fact most of the code that is valid 1.4 TS will fail under the rules described in your gist).
I agree with the fact that the discussion is going nowhere, and that 130+ comments is perhaps too much. But perhaps it's because there are two groups of people discussing here:

  • those who think that non-null types should be the default and want to find a way to make that happen (through a flag or any other mechanism)
  • those who don't want non-null types and just try to avoid their introduction, or to push them into a new syntax.

These two groups of people stopped listening to each other's arguments long ago, and in the end what we need is the point of view of the TS team. Until then, I'll personally stop trying to discuss this subject.

@fdecampredon
Regarding the part about my gist, I copy-pasted your arguments in #1800 and if you are genuinely interested we can discuss why it's a bad design there. I don't really understand your point of view but I don't want to start discussing here, that's why I created #1800 in the first place.

Regarding this thread, I agree with you that the discussion is going nowhere...

As you can tell from my previous comments, I'm not convinced of that.

Well I really don't know how to explain it in any other way - I've already explained it a few times. Let me try one more time.

Right now you can't express that

declare function err():string;

returns null. It's impossible, because TypeScript will always happily let you do err().split(' '). You are unable to say "that may not be valid".

After this change, you would be able to, by changing string to ?string or string|null

But if you don't, _you don't lose anything_ that you had before the change:

You didn't have null checking before the change, and you don't have it after (if you don't do anything). I argued that this is mostly backward compatible. That remains to be demonstrated on larger codebases, of course.

The difference is: you couldn't force null checking before the change, but you _can_ after (progressively, first on the definitions that you care most about, then at other places)
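
A sketch of that progression (findUser and its variants are hypothetical):

``` TypeScript
// step 1: leave the old definition alone; behavior is unchanged
declare function findUser(id: number): string;

// step 2: opt in where null actually matters
declare function findUserOrNull(id: number): string | null;

var name = findUserOrNull(42);
name.length;                     // error: name may be null
if (name !== null) name.length;  // ok
```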

@fdecampredon I still discuss the subject because I feel that there are a lot of misconceptions about the issue and about how "hard" and "strict" it will be to take advantage of the feature. I suspect that a lot of the potential issues are severely overblown. We should probably try and implement a basic variant of --noImplicitNull in a fork - I'm not sure I'll be able to find the time in this next period, but if I do I'll give it a try, sounds like a fun project.

@spion, I'm all for nulls being declared with the null keyword, but what is the behaviour between that and undefined in your proposal?

And @fdecampredon my only request was for not having a large breaking change, particularly one in which errors were shown in the wrong place.

I'm guessing, @spion, that in your current example you could pretty much continue coding TypeScript the same way we do now, if you assume that a local variable that is not assigned to is treated as nullable and that function parameters with a ? can also be null. Well, except for that example you had earlier with the nullable early function parameters.

@spion
What I understand from your last comment is massively different from what I understood before...

So : string is "your old string type that will never be checked for nullness, like <any> is never checked for type correctness".

The new syntax : string | null is backward compatible because it didn't exist before and accessing this type without a null check is an error.

That's interesting and it's hard to grasp all the implications. I'll have to think about it for some time.

First question that comes to mind is how do you put constraints on input values (e.g. function parameters):

declare function f(x: string): void;
f(null);

Is this ok or an error? If it's ok, how could I make it an error.

_I still think any idea inside this discussion is lost, don't you want to open a new issue with this idea of yours? We could discuss it there._

@jods4 that would be an error:

declare function f(x: string): void; // string must not be null.
declare function f(x: string|null): void; // string may be null (not sure about undefined here).
declare function f(x?: string): void; // I assume x may be null or undefined.

@jods4 I don't think we need to create multiple issues on the one topic; things will just get harder to track, as you'll have to visit 10 different proposals just to see if anything has happened, or subscribe to 10 different ones. And they'd all have to have links to the other proposals in their introductions so that everyone who goes looking for non-nullability doesn't only see & vote for one.

@fdecampredon I know there's a lot of posts and a lot of unresolved things, but that doesn't mean we should stop trying to find a solution that everyone likes. If you don't like the fact that we're still happy to talk about this and explain ourselves then you're more than welcome to unsubscribe from the topic.

@Griffork
But only after you've turned the magic flag on, right?
So with the flag off you have unchecked code, except new nullables types that didn't exist before; then when you turn the flag on, all types are non-nullable and checked?

I think the end result is probably the best solution. _But it is breaking pretty much all the code that exists today._ Imagine that I have 300K lines of code... I have to add a nullable annotation everywhere a type is actually nullable before I could turn on the flag. That's a huge deal.

If I understand correctly, this is nice for new projects (once .d.ts are updated) but very painful for existing code.

@Griffork my problem with this single issue and its long thread of comments is that it is very hard to have a focused discussion on a single proposal. Subscribing to 3 or 4 issues is not a big deal, and you have good context in each one.

@spion's original proposal had no magic flag. It was a core change to how typescript worked.

Yes, but someone who enters into this discussion should read all that came before it. If you think that people shouldn't read all that came before (and therefore potentially propose the same proposals) _then_ we can split it up; but I do not agree. The point of it being in one place is that although we were talking about different details or different possible implementations we were all talking about the same thing.
The same topic, not different topics.
All of the conversation is centralized, and none of it gets buried by not being responded to.

You can't really break down this conversation by implementation detail either, because a lot of proposals will often touch on many different details that are currently under discussion.

Splitting off the proposals from each other means that if someone spots a flaw they have to go to multiple different threads and write that. People aren't learning from each other's mistakes then, and people have the ability (which they will) to seclude themselves to one discussion and repeat the mistakes outlined in other discussions.

The point is: Splitting the conversation up will cause people to have to repeat a lot of posts because of viewers that are too lazy to read everything in all of the other linked posts (assuming that all of the relevant posts link all of the other posts).

Also @jods4 it's only a breaking change when you assign null to something.

IMO it's a lot less of a breaking change than your proposal, which would require every single definition where null shouldn't be assignable to be checked/touched before you can use it and get the benefits of non-nullable types.

Here is an example of what the "upgrade" process might be like to @jods4's and @spion's proposals (assuming any necessary flags are turned on):

function a(arg1: string): string;
a("mystr").toLowerCase(); //errors in @jods4's proposal because a may return null.
a("mystr").toLowerCase(); //fine in @spion's proposal.

a(null).toLowerCase(); //fine in @jods4's proposal because a may accept null.
a(null).toLowerCase(); //errors in @spion's proposal since a doesn't accept null.

Ultimately they both are breaking changes. They're both just as painful to debug, with errors occurring at variable use instead of variable declaration. The difference is that one errors when null is assigned to something (@spion), and the other errors when null isn't assigned to something (@jods4). Personally I assign null to things a lot less often than I don't, so @spion's is less of a change for me to work with.

Other things I like:

  • Variables' types not changing on me dynamically. If I wanted that I'd use Flow:
var s: string;
s.toUpperCase(); // error
var t = (s || "test").toUpperCase(); // ok. Inferred t: string
yes(t);  // ok
t = no();
yes(t);  // error

Personally I'd want an error if I called no(), not an implicit change of type.

  • Nullables containing the word null, rather than some symbol: it's easier to read and there's less to remember. Really, ! should be used for "not", not for "not-null". I hate having to remember what a symbol does, just as I hate acronyms; I can never remember them and they significantly slow down my work.

@jods4 I know that the current conversation is hard for people to catch up on, but splitting it into separate issues isn't going to help, as we're going to have to raise all of these points in all of them eventually because the uninformed will ask the same questions that we did. That's really not going to _help_ the conversation, it will just make it harder to maintain and keep up with.

@Griffork honestly I read all of it, but I doubt many people will. It went into many different and unrelated sub-discussions and I lost track more than once. But let's not make this thread more heavy than it already is and leave it at that.

It's a breaking change every time you have something that is nullable (which implies it gets assigned null at some point, otherwise it would not really be nullable). The error will be reported where null is assigned / passed to a function, but the fix is at declaration. This includes function declarations, variable declarations, field declarations in classes... A lot of stuff has to potentially get reviewed and modified.

@jods4 the unrelated sub-discussions should have been branched off, not the actual core information. But it's too late to go back and change that now.

@spion I think I have a simpler solution to the @RyanCavanaugh problem with the '--noImplicitNull' flag. The idea is pretty simple: we just have to disallow ?type, type|null, or type|undefined without this flag.
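
If I understand the idea correctly, it would look something like this (a sketch; both behaviors are hypothetical):

``` TypeScript
// without the flag: nullable annotations are simply rejected
var a: string | null;         // error: nullable types require the flag

// with the flag: they are allowed and checked
var b: string | null = null;
b.length;                     // error: b may be null
```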

@fdecampredon then you need two versions of every library available and kept up to date.
It would also kill our process, where we have one non-strict project for trying out code and running tests that depends on the code of a stricter project.

@jods4

declare function f(x: string): void;
f(null);

When using the proposed --noImplicitNull flag, yes, that would be an error.

I'm not comfortable opening an official proposal just yet. I want to at least do a few more thought experiments, or maybe try to implement a simpler version of the idea in a fork.

I noticed that the !T proposal will actually break lots of code as well. Not because the semantics of existing types changed but because the new analysis will raise errors in lots of places.

So I closed it. If we break lots of code I prefer the other approach, it seems cleaner.

I am now convinced that there is no way to introduce this feature without breaking lots of existing code. This is still good for all the code yet to be written and possibly old code can continue to compile without the benefits of null checks by the compiler -- which is exactly how it is today.

I wonder what the stance of the official TS team is regarding the breaking aspect of this vs its benefits.

It's very unlikely we would ever take a break as large as "make everything non-nullable by default". If there were some proposal which would _only_ issue new errors in cases which are obviously bugs, that could be on the table, but proposals like that are hard to come by :wink:

If the scope of impact were smaller -- changes in overload resolution when passing null as an argument, for example, where the exact behavior is not necessarily immediately obvious, that could be a different story.

:+1: sane person detected: @RyanCavanaugh

How about no changes to the existing way things work, but the introduction of a null type for unions that forces you to guard the variable before using it (or assigning it to a non-null-typed variable)? Default behaviour still allows nulls to be assigned to normally typed variables as well as variables typed with null; however, a function that may return null, if marked up correctly, will force a cast.

This way there are no breaking changes, however with good practices you can still have a bunch of bugs picked up by the typing system with both inferred typing and good coding practices.

Then (later perhaps?) you can add a --noImplicitNullCast flag that prevents null from being assigned to variables that do not have null as part of their typing (this more or less works like @spion's suggestion with the flag enabled).

I imagine this would be not too dissimilar to all normal typing and the addition of the --noImplicitAny flag.
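
A sketch of the difference the flag would make (the semantics are hypothetical, per the description above):

``` TypeScript
var s: string = null;          // default behavior: allowed, as today

// with --noImplicitNullCast:
var t: string = null;          // error: null is not assignable to string
var u: string | null = null;   // ok: null is part of the type
```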

@Griffork
The 1st half of the package (add a null type and forbid dereference without guard) is not 100% compatible. You will break more code than you think. Consider this:

With this new capability many lib defs will be updated, including the built-in one. Let's say they change some signatures to explicitly include null as a possible return value. Some examples:

interface Array<T> {
  find(predicate: (item: T) => boolean): T | null;
  pop(): T | null;
}
interface Storage {
  getItem(key: string): any | null;
}

There are many such apis: find returns undefined if no array element match the predicate, pop returns undefined if the array is empty, localStorage.getItem or sessionStorage.getItem both return null if the key was not found.

The following code is legitimate and compiled perfectly before. It will now break with an error:

var xs: string[];
if (xs.length > 0) return xs.pop().trim();  // error xs.pop() may be undefined (false positive)

var items : { id: number }[];
var selectedId : number;
// Assume we are sure selectedId is amongst items
var selectedItem = items.find(x => x.id === selectedId);
selectedItem.toString(); // error selectedItem may be undefined (false positive)

Same idea if you fetch something from localStorage that you know is there. There is plenty of code like that around and it now requires a cast (or some new syntax) to compile and tell the compiler: assume this is not null, I know what I'm doing.
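
For illustration, a not-null cast like the <!> from the earlier gist (syntax still hypothetical) would let the examples above compile without weakening the rest of the analysis:

``` TypeScript
if (xs.length > 0) return (<!>xs.pop()).trim();  // assert: pop() cannot be undefined here

var selectedItem = <!>items.find(x => x.id === selectedId);  // assert: the item exists
selectedItem.toString();  // ok, no false positive
```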

That will include a few actual bugs, sure. But it will also include lots of false positives and these are breaking changes. In 100K+ LOC that's not a trivial compiler upgrade.

This probably means that the only approach that would work is: new code written from scratch or migrated old code takes full advantage of null checking; legacy code has no benefit at all. This being driven by a compiler option (--enableNullAnalysis) and with no "transition" or "progressive" migration path.

Well, yes. But you can always freeze the lib.d.ts for a project or grab an old one.
New typing will break old code, that will happen anyway. Almost all new typing when introduced requires that old code gets updated in some way or another to work correctly (e.g. unions, generics).
This change means that those who want to ignore the update or freeze their library/code definitions won't suffer.
Theoretically most code using those functions should be guarded anyway (and typically is in large codebases), because there are lurking bugs in your code if you don't. Thus this change would more likely catch errors than cause widespread breaking changes.

Edit
@jods4 why did you use unsafe code as an example when that's exactly the kind of code that could cause an error that we're trying to detect by making this change?

Edit 2
If you don't break _some_ (wrong/unsafe) code, then there's no point to making any change to do with this topic ever.

But then again, since this is just a syntax problem, everything would still compile properly, it would just error all over the place.

_New typing will break old code, that will happen anyway._

This is not the case. Except for exceptional circumstances we would not be adding new features that break existing code. Generics and unions weren't added to lib.d.ts in such a way that requires consumers of lib.d.ts to update their codebase in order to use the new version of the compiler. Yes people could choose to use an older version of the library but it's simply not the case that we're going to take language changes that break lots of existing code (as is the case for essentially all major languages). There will be the occasional exception to this (https://github.com/Microsoft/TypeScript/wiki/Breaking-Changes) but they will be few and far between, most often if we believe the code we're breaking could only be a bug.

But you can always freeze the lib.d.ts for a project or grab an old one

Freezing lib.d.ts (and any other .d.ts I may use BTW) has very strong implications. It means I can't get any updates on the libs I use or the new HTML apis. This is not something to take lightly.

Theoretically most code using those functions should be guarded anyway (and typically are in large codebases) - because there's lurking bugs in your code if you don't.

JS has a tradition of returning undefined (or sometimes null) in a lot of cases that would throw an error in other languages. An example of this is Array.prototype.pop. In C# popping from an empty stack would throw, so you could say that it always returns a valid value. In JavaScript popping an empty array returns undefined. If your type system is strict about that, you have to take care of it somehow.

I knew you were going to answer that, which is why I wrote examples that are legitimate, working code. In large code bases you'll find lots of examples where an api may returns null in some situations, but you know that it's safe in your specific case (and so you ignore any safety checks).

I was looking at the micro task part of some library just an hour ago. The main part basically comes down to this:

class MicroTasks {
  queue: Array<() => void>;

  flushQueue() {
    while (this.queue.length > 0) {
      let task = this.queue.pop();
      task();  // error possible null dereference (not!)
    }
  }
}

There are lots of cases like this.

Regarding your Edit 2: yes, hopefully this change _will_ catch bugs and point out invalid code. I'm convinced it's useful and that's why I would like it to happen somehow. But the TS team will consider the breaking changes in _valid_ code. There are trade-offs that need to be decided here.

@danquirk Fair, then I guess that this feature can never be implemented.
@jods4 I understand what you're saying, but nullables can never be implemented without breaking that code.

Sad, should I finally close that issue?

Probably.

I think if we steer clear of the nullable and non-nullable argument...

The problem can be addressed (or mitigated) in a non-intrusive way with a set of new features:

  • Introduce the null-typeguard symbol ?<type>
    If a type is annotated with this symbol then it is an error to access it directly

``` TypeScript
var foo: ?string;

foo.indexOf('s'); // Error
foo && foo.indexOf('s'); // Okay
```

  • Introduce a flag --nouseofunassignedlocalvar

``` TypeScript
var foo: string;

foo.indexOf('s'); // error

foo = 'bar';

foo.indexOf('s'); // okay
```

Interesting historical note: While trying to find a related issue for this I came across an old issue on codeplex that mentions a --cflowu compiler option back in Feb-2013. @RyanCavanaugh, I wonder whatever happened to that flag?

  • Introduce the safe navigation operator #16

``` TypeScript
var x = { y: { z: null, q: undefined } };
console.log(x?.y?.z?.foo); // Should print 'null'
```

In combination, these features would help to trap more errors around the use of null without actually causing everyone a nervous breakdown.

@NoelAbrahams:
Your first proposal is essentially exactly the same as my last one, you're just using a ? instead of |null (read @jods4's post on the problems with updating lib.d.ts and breaking changes).

Your second proposal has a problem previously addressed by @RyanCavanaugh (above):

Flags that change the semantics of a language are a dangerous thing. One problem is that the effects are potentially very non-local:
...[snip]...
The only safe sort of thing to do is to keep the semantics of assignability the same and change what's an error vs what isn't depending on a flag, much like how noImplicitAny works today.

I'm fairly certain a flag not dissimilar was proposed earlier, and then abandoned because of @RyanCavanaugh's comment.

Your third proposal has very little to do with the current topic (it's no longer about typing and compile-time errors, but about catching run-time errors). The reason I say this is that this topic was created to help reduce the need to do undefined or null checks on variables that are known to be "safe" (with an easy way of keeping track of that), not adding new null or undefined checks everywhere.

Would it be possible to implement one of the suggested proposals, and just not update lib.d.ts (and the other libs) to use the nullables? Then the community could maintain/use their own versions with nullables if they want.

Edit:
Specifically, all libs contain the latest information, but don't have their typing updated to require typeguards.

@RyanCavanaugh I'll come back and discuss that if/when I have a patch that demonstrates that it's not that big of a change at all (especially if .d.ts files are not updated).

@danquirk type definitions won't have to be updated. They can remain "wrong" and will be mostly backward-compatible.

Anyway, it seems this issue is a lost cause.

By the way, there was a modified compiler by MSR that did this among other things - are there any papers available with their findings? Edit: found it: http://research.microsoft.com/apps/pubs/?id=224900 but unfortunately, I'm not sure it's related.

@Griffork, yes, I'm sure almost everything has been discussed in one form or the other - given the length of the discussion. My summary is a practical one about mitigating the problems around null by introducing a number of related features.

not adding new null or undefined checks everywhere

The trapping of unassigned local variables mitigates this and the safe navigation operator is a proposed ES feature, so it will land in TS at some point.

I also think the probability this gets into TS is low... :(

The only solution that I can think of is to make null safety an opt-in compiler flag. E.g. introduce the new T | null or ?T syntax but don't raise _any_ error unless the project opts in. So new projects get the benefits, and so do old projects that choose to make their code compatible with the new feature. Large codebases that are too big to be adapted easily just don't get this feature.
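
Under that scheme the same source would behave identically in both modes; only the reported errors would differ (a sketch, assuming the T | null syntax):

``` TypeScript
declare function find(): string | null;

var s = find();
s.length;  // flag off: compiles, the annotation is documentation only
           // flag on:  error, s may be null
```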

That said, even so there are several issues left to make this feature fly...

@NoelAbrahams sorry, my point was not that it was a bad suggestion, just not the answer that was desired from the instigation of this discussion.
I will certainly be using that feature when/if it becomes native, it sounds really good.

@jods4 that has the same problem as the @RyanCavanaugh comment I quoted above (assignments error due to a problem elsewhere when a flag is changed).

Again, easiest way is to implement one of the other proposals (probably @spion's) and not add the new types to the .d.ts'.

@Griffork
Not necessarily, although this is one of the "several issues" left. The feature should be designed so that enabling the flag does _not_ change any program semantics. It should only raise errors.

For example if we go with the 'not null' !T design, I don't think we have this issue. But this is far from a solved issue.

I should also point out that while T | null is a breaking change, ?T is not.

@jods4 I was under the impression that "changing semantics" included "changing assign-ability", either way my (and possibly @RyanCavanaugh's) concerns about non-local effects still stand.

@NoelAbrahams how?
T | null requires a typeguard, ?T requires a typeguard, they are the same thing with different names/symbols.
Yes, you can have both of them "turned on" with a flag.
I fail to see the difference?

@Griffork, the way I see it ?T simply means "do not access without checking for null". One is free to add this annotation to new code in order to enforce the check. I'm thinking of this as a sort of operator rather than type annotation.

The annotation T|null breaks existing code because it makes a statement about whether types are nullable or not by default.

@jbondc
Interesting... I have not looked too deeply into your proposal yet but I think I will.

At the moment compiler-enforced immutability is not as interesting in JS as in other languages because its threading model prohibits shared data. But surely something to keep in mind.

@NoelAbrahams
All null-enforcing proposals are breaking changes. Even ?T that you described. I showed several examples why a few comments above.

That's why I think the only way out, if we want this, is to make it a compiler option, and to think it through well enough that enabling the option changes the reported errors but not program behavior. Not sure if it's doable or not.

I'm actually kind-of okay with this. With TS 1.5 I'll be sort-of able to get what I want:

function isVoid(item:any): item is void { return item == null; }
declare function externalUnsafeFunction(...): string|void;

function test() {
  var res = externalUnsafeFunction(...);
  var words = res.split(' '); // error
  if (!isVoid(res)) {
    var words = res.split(' '); // ok
  }
}

now the isVoid check is forced on return values for externalUnsafeFunction or any other function or function definition where I add |void to the return type.

I still won't be able to declare functions that don't accept null/undefined, but it's possible to be diligent enough to remember to initialize local variables and class members.

Because @NoelAbrahams we already discussed that even with a null type, null would still have to be allowed to implicitly cast to any other type unless a future compiler flag changed that.

Also it means that in the future we can bring in a compiler flag that let's us annotate where and when a function can accept null.

And personally I hate the idea of using symbols to represent types when we can use words instead.

@spion that's a good point actually. If that works in TS currently then arguably all built-in .d.ts files are already "wrong"; adding the null type as proposed by you, with my suggested modifications, won't change anything.

Actually, since it's somewhat achievable already, I'm going to propose to my boss that we start using that this week.

I will caution that we in no way see T|void as a syntax that we would be afraid of breaking in the future. It is nearly nonsense.

@RyanCavanaugh It's about as nonsense as void === undefined (an assignable value).

Sigh.

Our code is too large to start depending on T|void if it will soon break and not be replaced. And I won't be able to convince the other programmers to use it if any week it could break.

Ok, @spion if you ever do make your patch, let me know and I'll run it against my work's codebase. I can at least give statistics about how many errors it causes.

@RyanCavanaugh nonsense, really? In what way? And what would you suggest to express nullable types that should be forbidden from any kind of consumption before a null/undefined check?

I really have a lot of libraries that would benefit from that.

@Griffork it won't be possible to safely remove void from the union before 1.5 anyway, not without user-defined type guards, so using this will only be possible post-1.5 (if it becomes possible at all).

Nonsense as in would you ever write this code?

var foo: void;
var bar: void = doStuff();

Of course not. So then what is the meaning of adding void to a union of possible types? Note this part of the language spec, which has existed for quite a while in the section describing The Void Type (3.2.4):

_NOTE: We might consider disallowing declaring variables of type Void as they serve no useful purpose. However, because Void is permitted as a type argument to a generic type or function it is not feasible to disallow Void properties or parameters._

This question:

_And what would you suggest to express nullable types that are should be forbidden for any kind of consumption before a null/undefined check?_

is what the entire thread is about. Ryan is merely pointing out that string|void is nonsense according to the current language semantics and so it's not necessarily reasonable to take a dependency on nonsense semantics.

The code I wrote above is afaik perfectly valid TS 1.5, right?

I'll give an example that is certainly not nonsense. We have a (nodejs) database library function we call get(), similar to LINQ's Single(), that unfortunately returns Promise<null> instead of throwing an error (a rejected promise) when the item is not found. It's used throughout our codebase in thousands of places and isn't likely to be replaced or go away soon. I want to write a type definition that forces me and the other developers to use a type guard before consuming the value, because we've had dozens of hard-to-trace bugs stemming from a null value moving far into the codebase before being consumed incorrectly.

interface Legacy { 
  get<T>(...):Promise<T|void>
}

function isVoid(val: any): val is void { return val == null; } // also captures undefined

legacy.get(...).then(val => {
  // val.consumeNormally() is a type error here
  if (!isVoid(val)) { 
    val.consumeNormally(); // OK
  }
  else { /* handle null case */ }
});

This looks completely reasonable to me. Adding void to the union causes all operations on val to be invalid, as they should be. The typeguard narrows the type by removing the |void and leaves the T part.

Perhaps the spec doesn't take the implications of unions with void into account?

Don't disallow void variables! They work as unit type values and can be used in expressions (unlike void in C#, which requires two sets of everything: one for actions and one for functions). Please leave void alone, ok?

What I'm saying is that T|void is essentially meaningless right now because we already have the rule that the union of a type and its subtype is equivalent to simply the supertype, e.g. Cat|Animal is equivalent to Animal. Treating void as a stand-in for the values null/undefined isn't coherent because null and undefined are _already in the domain of T_ for all T, which implies they should be subtypes of T

In other words, T|null|undefined, if you could write that, would already be subject to collapsing into T. Treating void as an incantation for null|undefined is wrong because if that actually were the case, the compiler would have collapsed T|void into T already.
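
The collapsing rule is easy to see with ordinary types (a sketch; Animal and Cat are hypothetical stand-ins):

``` TypeScript
interface Animal { name: string; }
interface Cat extends Animal { purrs: boolean; }

var pet: Cat | Animal;  // equivalent to plain Animal: the subtype is absorbed
```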

I would personally read this http://www.typescriptlang.org/Content/TypeScript%20Language%20Specification.pdf

The only possible values for the Void type are null and undefined. The Void type is a subtype of the Any type and a supertype of the Null and Undefined types, but otherwise Void is unrelated to all other types.

That is, in T|void, T is specifically _not_ a supertype of void, because the only supertype of void (according to the spec) is any.

Edit: disallowing it is a different issue.

Edit 2: I re-read what you said, and it makes sense literally (in the way that the documentation has worded void), but it doesn't make sense under what is assumed to be the common (or correct) interpretation of the documentation.

@RyanCavanaugh I'll use the feature anyway until it breaks, as in the meantime its likely to help me find a lot of bugs.

And I can't help but point out the irony of saying that collapsing T|null|undefined into T "makes sense" while the existence of T|void doesn't. (I know, its about the spec, but still...)

@jbondc

There are no static structural types in ECMAScript, so that argument is invalid. In ECMAScript, all values are of type any therefore anything can be assigned to anything or passed to anything. That fact should not imply anything about TypeScript's static type system, which is there to improve upon the non-existent one in JS

It's a bit sad that I have to explain the irony, but here goes:

  1. you have a value that supports no methods and no properties, null
  2. you make a type system where this value is assignable to variables of any type, most of them supporting various methods and properties (e.g. strings, numbers, or complex objects: var s: string = null)
  3. you make a void type that correctly accepts that value and allows no methods or properties, and as a consequence T|void allows no methods and properties.

The claim is that (3) is nonsense, while (2) isn't. Do you see the part that is ironic here? If not, I'll try another example:

  1. you have values that support only some methods and properties {}
  2. you make a type system where this value is assignable to variables of any type (e.g. to strings, numbers, or complex objects: var s: string = {}) that have a much larger set of methods and properties
  3. you have an {} type that correctly accepts {} values and allows no methods or properties other than a few built in ones, and T|{} naturally allows no methods or properties other than the built-in ones of {}

Now which one is nonsense, (2) or (3) ?

What is the only difference between the two examples? A few built in methods.

Finally, the interface was just a type definition description. It's not an interface that is being implemented anywhere. It's the interface provided by the library, which never returns anything other than promises, but may return a promise for null. Therefore the safety is definitely very real.

@spion that's exactly what I was thinking.

Here, I think this will clear things up a little:
According to the Spec:


What is void?

The Void type, referenced by the void keyword, represents the absence of a value and is used as the return type of functions with no return value.

So, void is not null|undefined.

The only possible values for the Void type are null and undefined

So, null and undefined are assignable to void (as they are assignable to pretty much anything else).


Null is a type.

3.2.5 The Null Type
The Null type corresponds to the similarly named JavaScript primitive type and is the type of the null
literal.
The null literal references the one and only value of the Null type. It is not possible to directly reference the Null type itself.


Undefined is a type.

3.2.6 The Undefined Type
The Undefined type corresponds to the similarly named JavaScript primitive type and is the type of the undefined literal.


Void is only a Supertype of the Null and Undefined _types_

The Void type is a subtype of the Any type and a supertype of the Null and Undefined types, but otherwise Void is unrelated to all other types.


So @jbondc, according to the TypeScript Language Specification, null is a value, Null is a type, undefined is a value, Undefined is a type, and Void (lower-case when in code) is a type that is a supertype of Null and Undefined but _does not implicitly cast_ to any other type (except any). In fact, Void is specifically written to have different behaviour from the Null and Undefined types:

Void:

The Void type is a subtype of the Any type and a supertype of the Null and Undefined types, but otherwise Void is unrelated to all other types.

Null:

The Null type is a subtype of all types, except the Undefined type. This means that null is considered a valid value for all primitive types, object types, union types, and type parameters, including even the Number and Boolean primitive types.

Undefined:

The undefined type is a subtype of all types. This means that undefined is considered a valid value for all primitive types, object types, union types, and type parameters.

Clearer now?

What we're asking for is a type that forces a guard for nulls, and a type that forces a guard for undefineds. I'm not asking for null or undefined to not be assignable to other types (that's too breaking), just the ability to opt in to extra mark-up. Heck, they don't even have to be called Null or Undefined (the types).

@jbondc this is the irony: No matter what the TS compiler says, the null and undefined values in JS will never support any methods or properties (other than throwing exceptions). Therefore, to make them assignable to any type is nonsense, about as much nonsense as allowing the value {} to be assigned to strings. The void type in TS contains no values, _but_ null and undefined are assignable to everything, and therefore can be assigned to variables of type void, so there is another bit of nonsense right there. And that's the irony - that (via type guards and unions) both currently combine into something that actually makes sense :)

Something as complex as the TS compiler was written without guards, that's something to think about.

There are pretty big applications written in PHP or Java; that does not make those languages better or worse.
The Egyptian pyramids were built without any modern machinery; does that mean that everything we have invented in the past 4,500 years is crap?

@jbondc See my comment above with the list of issues in the TypeScript compiler caused by null and undefined values (minus the first one on the list)

@jbondc it's not supposed to solve the world's problems, it's supposed to be another tool at developers' disposal.
It would have the same impact and utility as function overloads, unions and type aliases.
It is another opt-in syntax that allows developers to more accurately type values.

One of the big issues mentioned for making types non-nullable is what to do about existing TypeScript code. There are two issues here: 1) Making existing code work with a newer compiler that supports non-nullable types and 2) Interoperating with code that hasn't been updated - avoiding a Python 2/3 style quagmire

It should be possible, though, to provide a tool which performs an automated rewrite of existing TS code to make it build, and for this limited case something that works a lot better than, say, 2to3 did for Python.

Perhaps a gradual migration could look something like this:

  1. Introduce support for '?T' syntax to indicate a possibly null type. Initially this has no effect on type checking, it is just there for documentation.
  2. Provide a tool which auto-rewrites existing code to use '?T'. The dumbest implementation would convert all uses of 'T' to '?T'. A smarter implementation would check whether the code that used the type implicitly assumed it was non-null and use 'T' in that case. For code which tries to assign a value of type '?T' to a parameter or property expecting 'T', this could wrap the value in a placeholder which asserts (at compile time) that the value is non-null, e.g. let foo: T = expect(possiblyNullFoo) or let foo: T = possiblyNullFoo! (see the expect sketch below)
  3. Apply the tool to all the provided type definitions and those in the DefinitelyTyped repository
  4. Introduce a compiler warning when trying to assign '?T' to a slot expecting a 'T'.
  5. Let the warnings bake in the wild and verify that porting is managable and see how the change is received
  6. Make assigning '?T' to a slot expecting a 'T' an error, in a major new release of the compiler.

Perhaps it might also be worth introducing something that would disable the warnings for specific modules to ease use of code which the tool has not yet been applied to.
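
A sketch of what the expect helper from step 2 might look like (the name and runtime-throwing behavior are assumptions from the comment above; Foo and lookupFoo are hypothetical):

``` TypeScript
interface Foo { bar(): void; }
declare function lookupFoo(): Foo;  // imagine this rewritten to '?Foo' under the proposal

// compile-time: turns a '?T' into a 'T'; runtime: throws to keep the static claim honest
function expect<T>(value: T): T {
  if (value == null) throw new Error('expect(): value was null or undefined');
  return value;
}

var foo: Foo = expect(lookupFoo());
foo.bar();  // ok: no further null checks needed
```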

In the previous comment, I'm assuming that there would be no mode to choose nullability of plain 'T' to avoid splitting the language and that steps 1-3 would be useful on their own even if step 6 never proves practical.

My random thoughts:

  1. The expect() part may be complicated; it would need to be well thought out. There are not only assignments, but also interfaces, generics, and parameter types...
  2. I _think_ TS does not have any warning, only errors. If I'm right, I'm not sure they will introduce warnings just for that feature.
  3. Even if many people upgrade their code, enterprises may be very slow to do so, or may even never want to do so in the case of large existing codebases. This makes that major release a huge breaking change, something that the TS team doesn't want.

That said, I'm still thinking that an "opt-in" flag to report null errors is an acceptable solution. The issue is that the same code/definitions should have the same behavior (minus the errors) when the flag is off, something I haven't had the time to think through yet.

@jods4 - For the expect() part, my thinking is that wherever you have ?T the compiler sees T | undefined so you could re-use the rules for handling union types as far as possible.

I agree I'm not sure how realistic a 'flag day' is for TypeScript. Aside from an 'opt-in' flag, it could also be an opt-in flag that eventually becomes an opt-out flag. The important thing is to have clear direction about what the idiomatic style is.

Something else relevant to this discussion - https://github.com/rwaldron/tc39-notes/blob/master/es6/2015-01/JSExperimentalDirections.pdf discusses investigations by the Chrome team into using type info (specified via TypeScript-style annotations) in the compiler to optimize JS. There is an open question in there about nullability - they would like non-nullable types but aren't sure about feasibility either.

Just gonna chime back in here. I'm surprised we haven't really resolved this one.

I say there is a value add here. One that is only applicable at compile time.
Instead of having to write more code to null-check a value, being able to declare a value as not nullable can save time and coding. If I describe a type as 'non-nullable' then it has to be initialized and possibly asserted as non-null before being passed on to another function that expects non-null.

As I said above, maybe this could simply be solved by a code contracts implementation.

Why can't you just quit using the null literal and guard all the places where nulls can leak into your code? That way you will be safe from null reference exceptions without having to check for null everywhere.

Null checks are significantly faster than undefined checks.

My point is that those checks are unnecessary (if the null keyword is banned
and all other places where it can come in are guarded).

Why? 1. Code without any checks at all runs significantly faster than code with null checks. 2. It contains fewer ifs and is therefore more readable and maintainable.

You're missing the point: we use nulls to avoid undefined checks.
And since our application does not have one consistent state, we do, and often have to, represent things as null/undefined instead of an object.

You are missing my points too. In order to keep your state consistent and be able to represent missing values, you don't need to use nulls or undefined. There are other ways, especially now with union type support. Consider:

class Nothing { public 'i am nothing': Nothing; }
class Something { 
    constructor(
    public name: string,
    public value: number
   ) { }
}
var nothing = new Nothing();
var whoami = Math.random() > 0.5 ? new Something('meh', 66) : nothing;

if (whoami instanceof Nothing) {
    console.log('i am a null killer');
} else if (whoami instanceof Something) {
    console.log(whoami.name + ': ' + whoami.value);
}

So we just encoded a missing value without using null or undefined. Also note that we are doing it in a 100% explicit manner. Thanks to type guards, there is no way one can miss a check prior to reading the value.

How cool is that?

Do you think that an instanceof check is faster than a null check, for an
application that has to do hundreds of these per second?
Let me put it this way: we've had to create aliases for functions because
it's faster to not use '.' accessors.

@aleksey-bykov Note that I addressed that point already: the need to model type definitions for existing libraries that already use null values isn't solved. In that case, you can't just "quit using" null values.

Also, that doesn't even touch upon issues with uninitialized variables that will be undefined. In Haskell, the problem doesn't even exist because you can't leave a value "uninitialized".

@Griffork, I don't think; I run tests, and the tests say it depends on the browser:
http://jsperf.com/nullable-vs-null-vs-undefined-vs-instanceof
What I do think is that you need to find a balance between safety and performance. Then you need to measure your performance carefully before trying to optimize it. Chances are you are fixing something that is not broken, and the null checks you are so concerned about constitute less than 2% of overall performance; if so, making them twice as fast will only gain you 1%.

@spion, all things considered it is a much nicer way than to keep using nulls in your code, given the current situation and the problems non-nullables bring with them

Our code base is over 800 TypeScript files and over 120,000 lines of code, and we never needed nulls or undefined when it came to modeling business domain entities. And although we had to use nulls for DOM manipulation, all such places are carefully isolated, so that nulls have no way to leak in. I don't buy your argument that nulls might be needed; there are production-ready languages without nulls at all (Haskell) or with null banned (F#).
(In reply to Jon: "@aleksey-bykov From a practical perspective, you can't ignore null. Here's some background on why you'd need a nullable type anyway: https://www.google.com/patents/US7627594?ei=P4XoVPCaEIzjsATm4YGoBQ&ved=0CFsQ6AEwCTge")

Yep, and the most important thing you can understand here is that we're NOT modelling business domain entities, we ARE HIGHLY time sensitive, and our codebase is not too much smaller than yours.

For your purpose you don't need nulls, which is fine, you can go and continue using Typescript without them.
For our purpose we need nulls, and we can't afford not to.

Oh, just saw your other comment, sorry.
If you read a little earlier in this thread, you'll notice that I've said (essentially) that we've always used nulls, and we've rarely had any problems with them seeping into other bits of code.

If you also read above, the non-nullable proposal also covers undefined.

Null and undefined look similar speed-wise in variables, but as soon as they're present on an object (particularly one with a prototype chain) the story is very different, so we use nulls instead.

Again, probably not worth worrying about, unless (like us), you're doing 10,000 checks in <10ms.

(And yes, we actually changed due to tracking the performance of our application and finding a bottleneck)

I would like to spur some discussion about this. Just yesterday, in a Build video, I saw analyzers implemented in C#. I opened issue #3003 to implement something similar for TypeScript.

Basically, an analyzer is a library that can do additional syntax checks and tag lines as errors/warnings, and it is interpreted by the language service. This would effectively allow null checks to be enforced by an external library.

In fact, I see analyzers in TypeScript being really great for other things too. There are really weird libraries out there, even more so than in C#.

For anyone following this issue: I'm maintaining a library called monapt that lets you declare the nullability of a variable. It includes Typescript definitions, so everything is compile-time checked.

Feedback welcome!

I just read through the whole thread.

I guess the proposal should use the terms non-voidable and voidable, because void in the TS spec is a type for the null and undefined values. (I assume the proposal also covers the undefined value :smile: )

I also agree that accessing undefined or null is the root of all evil in programming; adding this feature will save people countless hours.

Please add this! And it would be best if there were a flag I could pass to make it the default. It would save a lot of debugging hell for a lot of people.

While I can agree that ADTs can be used in place of nullable types, how can we be sure that the ADT we pass isn't null itself? Any type in TS can be null, including ADTs, right? I see it as a major flaw in thinking about ADTs as a solution to the problem of non-nullable (non-voidable) types. ADTs only work as well as they do in languages which support non-nullable types.

Another key issue is library definitions. I might have a library which expects a string as its argument, yet the TS compiler happily type-checks passing null to the function! That's just not right...
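
To make that concrete, nothing stops a null from crossing such a declared boundary today (greet is a made-up stand-in for any .d.ts declaration):

// from some hypothetical library.d.ts
declare function greet(name: string): string;

greet(null); // type-checks fine today, but may explode inside the library at runtime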

I see it as a major flaw in thinking about ADTs as a solution to the problem...

  1. get ADT's available through a design pattern (due to the lack of native
    support from TypeScript)
  2. forbid the null keyword
  3. guard all places where nulls can leak into your code from the outside (a sketch follows this list)
  4. enjoy the null problem gone for good
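
A minimal sketch of what the guard in step 3 can look like at an external boundary (fetchUserRaw and the payload shape are made up):

interface ServerUser { name: string; }
declare function fetchUserRaw(): ServerUser; // external API: may actually return null at runtime

// the guard: nulls are rejected at the boundary, so they never enter our code
function fetchUser(): ServerUser {
    var user = fetchUserRaw();
    if (user == null) { throw new Error('external API returned null'); }
    return user;
}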

On our project we managed to do steps 1, 2 and 3. Guess what we are doing
now.

@aleksey-bykov

How do you do (2) with undefined? It can arise in a variety of situations that don't involve the undefined keyword: uninitialized class members, optional fields, missing return statements in conditional branches...
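
For instance, all of the following compiles today without the undefined keyword ever appearing (a small illustrative sketch):

class Point {
    x: number; // never initialized: reads as undefined at runtime
}

function half(n: number): number {
    if (n > 0) { return n / 2; }
    // falling off the end here yields undefined, despite the declared return type
}

var p = new Point();
var bad = half(p.x) + 1; // NaN at runtime, yet everything type-checks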

Compare typescript with flowtype. At least flow deals with most of the cases.

Also, if you're willing to say "Forget about the common idioms in the language. I don't care, I will insulate myself from the rest of the world", then you're probably better off using purescript.

It can arise in a variety of situations that don't involve the undefined keyword...

  • uninitialized class members - classes forbidden
  • optional fields - optional fields/parameters forbidden
  • missing return statements - a custom tslint rule that makes sure that all execution paths return

better off using purescript

  • i wish, but the code it generates treats performance as the least of its concerns

@aleksey-bykov it's funny that you mention performance, because replacing nullable types with ADTs imposes significant CPU and memory overhead compared to just enforcing null checks (like flowtype does). The same goes for not using classes (which can be prohibitively expensive if you instantiate lots of objects with lots of methods on them).

You mention giving up optional fields but they're used by so many JS libraries. What did you do to deal with those libraries?

You also mention not using classes. I suppose you also avoid libraries that rely (or will rely) on classes (like e.g. React)?

In any case, I don't see a reason to give up on so many features when a perfectly reasonable solution (that actually fits the underlying untyped language) is most likely possible to implement.

it's funny that you think that the sole purpose of ADTs is representing a missing value that would otherwise be encoded by null

i'd rather say that having the "null" problem gone (and classes too) is a byproduct of all the good things ADTs bring us

addressing your curiosity: for performance-critical pieces like tight loops and large data structures, we use a so-called Nullable<a> abstraction, fooling TypeScript into thinking that it deals with a wrapped value while in fact (at runtime) such a value is just a primitive that is allowed to take nulls; there is a special set of operations on Nullables that prevents that null from leaking

let me know if you want to know the details

interface Nullable<a> {
    'a nullable': Nullable<a>;
    'uses a': a;
}
/*  the following `toNullable` function is just for illustration, we don't use it in our code,
    because there are no values capable of holding naked null roaming around,
    instead we just alter the definition of all unsafe external interfaces:
    // before
    interface Array<T> {
       find(callback: (value: T, index: number, array: T[]) => boolean, thisArg?: any): T;
    }
    // after
    interface Array<a> {
       find(callback: (value: a, index: number, array: a[]) => boolean, thisArg: any): Nullable<a>;
    }
*/
function toNullable<a>(value: a) : Nullable<a> {
    return <any>value;
}
function toValueOrDefault<a>(value: Nullable<a>, defaultValue: a)  : a {
    return value != null ? <any>value : defaultValue;
}
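
For context, a short usage sketch of the two helpers above (the data is made up; the altered Array<a>.find from the comment would be consumed the same way):

var first = toNullable([1, 2, 3].filter(x => x > 10)[0]); // Nullable<number>, missing here
var recovered = toValueOrDefault(first, 0); // falls back to 0 when the value is absent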

Why use a recursive type structure? 'a nullable': a should be sufficient, no?

I suppose you also have callFunctionWithNullableResult<T,U>(f: (T) => U, arg:T):Nullable<U> (including all other overloads for functions with different arity) to deal with all external functions that return nulls?

And if the null keyword is forbidden by a linter, how do you get a "Nothing" stored in a nullable? do you special-case toNullable(null)?

I know what algebraic datatypes are. Can you give me an instance (other than exhaustiveness checks and the clunkiness) where ADTs (or rather, sum types) + pattern matching can do something that typescript's unions + user defined type guards cannot do?

Why use a recursive type structure? 'a nullable': a should be sufficient, no?

well, in this example it is enough, but doing it like this overloads the purpose of that one field, which would then serve 2 distinct cases

i personally hate overloaded meanings, which is why i go with 2 separate pseudo-fields where each solves a separate problem

I suppose you also have callFunctionWithNullableResult<T,U>(f: (T) => U, arg: T): Nullable<U>

no, as i said, we alter the definition (*.d.ts) files by replacing anything that we have evidence can be null in the external code with a Nullable "wrapper", see the example in the comment in my previous message

how do you get a "Nothing" stored in a nullable

we don't store nulls in nullables, nullables come from the server (which we have limited control of) or from the external API's, as soon as it is in our code it is wrapped in Nullable, but our code is not capable of producing a nullable at will, we can only transform a given nullable into another nullable, but not to create one out of thin air

typescript's unions + user defined type guards cannot do

ha! i am not a fan of union types at all and can talk for hours about what makes them a bad idea

back to your question, give me a reusable equivalent of Either a b using unions and type guards
(a hint: https://github.com/Microsoft/TypeScript/issues/2264)

The types in your example aren't more nominal just because they're recursive - they remain "pseudo-nominal".

To answer your question though: you don't implement a generic Either, precisely because types are not nominal. You simply use

type MyType = A | B

and then use type guards for A and B... As a bonus this works with every existing JS library too: e.g. the idiomatic way to check for Thenable is if there is a then method attached to the object. Will a type guard work for that? Most certainly.
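
For the record, a minimal sketch of that duck-type check expressed as a user-defined type guard (the Thenable shape is simplified here):

interface Thenable<T> {
    then(onFulfilled: (value: T) => void): void;
}

function isThenable(x: any): x is Thenable<any> {
    return x != null && typeof x.then === 'function';
}

declare var result: any;
if (isThenable(result)) {
    result.then(value => console.log(value)); // narrowed: then is known to exist
}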

Ha! That's a damn good catch of yours. Well, something must have changed in the recent versions of the compiler. Here is the bulletproof nominal version:

enum GottaBeNominalBrand {}
interface GottaBeNominal {
    branded: GottaBeNominalBrand;
}
function toGottaBeNominal(): GottaBeNominal {
    return <any> {};
}

Wait a second! You cheated! The good old way still works like a charm.

To answer your question though: you don't implement a generic Either, precisely because types are not nominal. You simply use...

that's nice, but how is it Either though? the authentic Either is GENERIC and there are some 30 standard convenience functions around it; now you are saying i need to go resolved (nominal) and have as many as 30 x number_of_combinations_of_any_2_types_in_my_app of those functions for all possible pairs i might encounter?
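
For comparison, a reusable generic Either can be sketched with classes and instanceof type guards (illustrative only; the names are made up, and whether this replaces the 30 convenience functions is a separate question):

class Left<L> { constructor(public left: L) { } }
class Right<R> { constructor(public right: R) { } }
type Either<L, R> = Left<L> | Right<R>;

// a single generic fold serves every pair of types; no per-pair duplication is needed
function either<L, R, T>(e: Either<L, R>, onLeft: (left: L) => T, onRight: (right: R) => T): T {
    return e instanceof Left ? onLeft((<Left<L>>e).left) : onRight((<Right<R>>e).right);
}

var parsed: Either<string, number> = Math.random() > 0.5 ? new Right(42) : new Left('failed');
console.log(either(parsed, error => 'error: ' + error, value => 'got ' + value));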

Hopefully this thread is still monitored by people from the TS team. I want to try to reboot it!
It seems to me that non-null types got stuck when it became obvious that it was impossible to introduce this in a non-breaking way. The C# team is thinking about non-nullable reference types and is of course faced with the same problems, notably compatibility. I like the ideas that they are exploring at the moment and I wonder if they would be applicable to TS.

C# vnext brainstorm (briefly)

Basically, in C# you have non-null value types, such as int. Then in C# 2.0 we got the nullable value types, such as int?. References have always been nullable, but they are thinking about changing this and applying the same syntax: string is never null, string? may be.

Of course, the big issue is that this changes the meaning of string and breaks code. Where it was ok to assign/return null to a string, it is now an error. The C# team is thinking of "disabling" those errors unless you opt-in. Dotting into string? is always an error, because the syntax is new and was not allowed before (hence it's not a breaking change).
The other problem is with other libraries, etc. To address that, they want to make the opt-in flag per file or assembly. So external code that opts into the stricter type system gets the benefits, and old legacy code and libraries continue to work.

How could that transpose to TS?

  1. We introduce a non-null types opt-in flag. It could be global (passed to the compiler) or per-file. It should always be per-file for .d.ts.
  2. Basic types are non nullable, e.g. number. There is a new null type.
  3. number? is simply a shortcut for number | null.
  4. The feature introduces new errors for nullable types, like (x: string?) => x.length without a type guard.
  5. The feature introduces new errors for non-nullable types, like assign let x: string = null;
  6. When manipulating a non-null type that is declared outside of the opt-in scope, errors reported under 5. are ignored. _This is slightly different from saying string is still considered a string?._ A sketch of rules 3-5 follows.
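
To make rules 3-5 concrete, here is a sketch of how code could behave under the proposed opt-in flag (hypothetical syntax; none of this is valid in today's compiler):

let a: string = null;      // error under rule 5: null is not assignable to string
let b: string? = null;     // ok under rule 3: shorthand for string | null
let n = b.length;          // error under rule 4: b may be null
if (b != null) {
    let m = b.length;      // ok: the null check narrows b to string
}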

What good is this?

When I write new code, I can opt-in and get complete static null checks in my code and in updated library definitions. I can continue to use legacy library definitions without errors.

When I use old code, I can opt-in for new files. I can declare string? anyway and get errors when accessing members without a proper null check. I also get benefits and strict checks for libraries whose definitions have been updated. Once a .d.ts opts in, passing null to a function defined as length(x: string): number becomes an error.
_(Note: Passing string? is an error as well, although passing string from opt-out code is not.)_

As long as I don't opt-in, there is no breaking change.

What do you think?

Of course, there's a lot to consider regarding how the feature would work in practice. But is such an opt-in scheme something that the TS team might even consider?

it's amazing how people want a faster horse instead of an automobile..

No need to be condescending. I have read your comments about ADT and I don't think they are what I need. We may discuss that if you want to, but in a new issue about "ADT support in TS". You may write there why they are great, why we need them and how they alleviate the need for non-nullable types. This issue is about non-nullable types and you're really hijacking the topic.

what you are talking about is called constant propagation and, if implemented, should not be limited to just 2 random constants that happen to be null and undefined. The same can be done for strings, numbers and whatever else, and that way it makes sense; otherwise it would just be nursing a few old habits some people are not willing to quit

@aleksey-bykov and of course then to get better type restrictions you implement higher kinded types, and to get derived instances you implement typeclasses and so on and so forth. How does that fit with the "align with ES6+" design goal of TypeScript?

And it's still all useless in solving the original problem from this issue. It's easy to implement Maybe (and Either) in TS using generic classes, even today (without union types). It will be about as useful as adding Optional to Java was, which is: barely. Because someone will just assign null somewhere or need optional properties, and the language itself won't complain, but things will explode at runtime.

It's unclear to me why you insist on abusing the language to bend it in ways it was never designed to. If you want a functional language with algebraic datatypes and no null values, just use PureScript. It's an excellent, well-designed language with features that fit coherently together, battle-tested over the years in Haskell without the accumulated warts of Haskell (principled typeclass hierarchy, real records with row polymorphism...). It seems to me that it's much easier to start with a language that adheres to your principles and needs and then tweak for performance, rather than start with something that doesn't (and was never meant to) and then twist, turn and restrict it to get it to work your way.

What is your performance problem with PureScript? Is it the closures produced by pervasive currying?

@jods4 does that mean that all currently existing .d.ts files will need to be updated with an opt-out flag, or that the project-wide setting doesn't affect .d.ts files (and a .d.ts without a flag is always assumed to be the 'old' way)?
How understandable is this (is it going to cause problems for people working on multiple projects, or confusion when reading .d.ts files)?
What will the interface between new and old (library) code look like? Will it be littered with a bunch of (in some cases) unnecessary checks and casts (probably not a bad thing, but it might turn beginners off the language)?
Mostly I (now) like the idea of non-nullable types, and the suggestion by @jods4 and the ! suggestion are the only ways I see non-nullable types working (ignoring ADTs).

It may be lost in this thread, but what problem does this solve? That is something that has eluded me. The arguments I have seen recently in this thread boil down to "other languages have it". I am also having a hard time seeing what this solves that is useful and doesn't require some sort of machinations in TypeScript. While I can understand that null and undefined values can be undesirable, I tend to see this as higher-order logic that I want to put into my code, versus expecting the underlying language to handle it.

The other minor nit is that everyone is treating undefined as a constant or a keyword. Technically it is neither. In the global context there is a variable/property undefined, which is of the primitive type undefined. null is a literal (which also happens to be of the primitive type null, though it is a long-standing bug that typeof null returns "object"). In ECMAScript, these two things are quite different.

@kitsonk
The problem is that all types can be null and all types can be undefined, but the operations that work on a specific type (e.g. divide) do not work on null or undefined values, causing errors.
What they want is the ability to specify a type as "is a number and is never null or undefined", so that when using a function that has the possibility of introducing null or undefined, programmers know to guard against invalid values.
The two main arguments for being able to specify a value as "is this type but can't be null or undefined" are:
A) it's obvious to all programmers using the code that this value must never be null or undefined.
B) the compiler helps catch missing type guards.

there is a variable/property undefined which is of the primitive type undefined. null is a literal (which also happens to be of the primitive type null

@kitsonk this is a great argument for this issue: null is not a number, and it's no more a Person than a Car is. It's its own type, and this issue is about letting TypeScript make that distinction.

@kitsonk I think one of the most common errors is null reference / undefined access errors. Having the capability to specify that a property or variable cannot be null/undefined will make code more sound, and will also let tools such as the language service catch bugs like this one: https://github.com/Microsoft/TypeScript/issues/3692.

Thanks for the clarification. Maybe I am missing something here again too, but having looked back through this... since there is already an "optionality" marker in TypeScript, why would the following not work as a proposal?

interface Mixed {
    optional?: string;
    notOptional: string;
    nonNullable!: string;
}

function mixed(notOptional: string, notNullable!: string, optional?: string): void {}

While there is greater functionality in some of the proposed solutions, I suspect type guarding against them becomes a challenge, whereas this (though I don't know the inner workings of TypeScript well) seems like a natural extension of what is already being checked.

Since there is already an "optionality" argument in TypeScript, why would the following not work as a proposal?

Just to clarify: optionals right now in TS are only optional during initialization. Null and undefined are still assignable to all types.

These are some unintuitive cases of optionals

interface A {
   a?: string;
}
let a: A = {} // ok as expected

interface B {
   b: string;
}
let b1: B = { b: undefined } //ok, but unintuitive
let b2: B = { b: null } // ok, but unintuitive
let b3: B = {} // error as expected

Or, to keep it simple: optional in TS means non-initializable. This proposal, on the other hand, says that undefined and null cannot be assigned to a non-voidable (non-null, non-undefined) type.

The ? is placed by the property/argument name because it refers to whether that name _exists_ or not. @tinganho explained it well. Extending his example:

function foo(arg: string, another?: string) {
  return arguments.length;
}

var a: A = {};
a.hasOwnProperty('a') // false
foo('yo') // 1, because the second argument is _non-existent_.
foo(null) // this is also ok, but unintuitive

The non-null ! is unrelated to _existence_ of a name. It is related to the type, which is why it should be placed by the type. The type is non-nullable.

interface C {
  c: !string;
}

let c1: C = { }; // error, as expected
let c2: C = { c: null }; // error, finally!
let c3: C = { c: 'str' }; // ok! :)

function bar(arg: !string) {
  return arg.length;
}

bar(null) // type error
bar('foo') // 3

@Griffork Existing .d.ts would work fine and can be updated smoothly to support more strict definitions.
By default .d.ts do not opt-in. If a library author updates its library with faithful non-null definitions, it indicates so by setting a flag at the top of the file.
In both cases everything just works without any thinking / configuration from the consumer.

@kitsonk The problem this solves is reducing bugs because people assume something is not null when it might be, or pass nulls into functions that don't support it. If you work on large projects with many people you may consume functions that you did not write. Or you may consume 3rd party libraries that you don't know well. When you do: let x = array.sum(x => x.value), can you assume that x is never null? Maybe it's 0 if array is empty? How do you know? Is your collection allowed to have x.value === null? Is that supported by the sum() function or is it going to be an error?
With this feature those cases are statically checked and passing a possibly null value to a function that doesn't support it or consuming a possibly null value without a check are reported as errors.
Basically, in a completely typed program, there can be no "NullReferenceException" anymore (minus corner cases related to undefined, unfortunately).

The discussion about optional parameters is unrelated. Allowing null or not is related to the type system, not the variable / parameter declarations and it integrates nicely with type guards, union and intersection types, etc.

type _void_ doesn't have any values but both null and undefined values are assignable to it

here is a good summary: https://github.com/Microsoft/TypeScript/issues/185#issuecomment-71942237

@jbondc I think I checked the spec recently. void is an umbrella term for null and undefined. Don't know about void 0, but I guess void covers it as well.

It infers any, but null and undefined are still assignable to all types.

void 0 always evaluates to undefined; see also http://stackoverflow.com/a/7452352/449132.

@jbondc if the question was about a potential null-aware type system such as the one I described above,

function returnsNull() {
  return null;
}

should infer the type null.

There are lots of corner cases around undefined and to be honest I am not (yet?) sure about what is the best way to tackle it. But before spending a lot of time thinking about it I would like to know if the TS team would be OK with an opt-in strategy.

Last time I took part in this discussion it ended with: "we won't do breaking changes and we don't want to change the meaning of source code based on ambient flags/options". The idea I describe above is not a breaking change and is relatively good with respect to source interpretation, although not 100%. That grey area is why I ask what the TS team thinks.

@jods4 I would prefer the inferred nullable any, which would keep the existing syntax. And an opt-in strategy would be preferable. [1]

I would prefer an explicit syntax for non-nullable types. For objects, it would break a few too many things all at once. And also, optional arguments should always be nullable for obvious reasons (and it should be impossible to change). Default arguments should retain their current signature externally, but inside the function itself, the argument may be made non-nullable.

I do find this absurd, though, since nullable numbers break a lot of the optimizations engines can do, and they are not a common use case outside of optional/default arguments. It might be a little more feasible/non-breaking to make numbers that aren't optional arguments non-nullable by default via a compiler flag. This would also require a syntax to explicitly mark nullable types, but I'm already seeing that as a likely result of this discussion.

[1] Introducing a flag to make this the default wouldn't work well in the first place for practical reasons. Just look through the issues people ran into with --noImplicitAny, which resulted in some migration hazards, a lot of practical issues in testing property existence leading to --suppressImplicitAnyIndexErrors, and several broken DefinitelyTyped definitions.

@impinball

I would prefer the inferred nullable any, which would keep the existing syntax. And an opt-in strategy would be preferable.

I have edited my answer here. Thinking more about this, I just don't see any useful case where you would implicitly infer the null type alone. Like: () => null or let x = null or function y() { return null }.

So I agree continuing to infer any is probably a good thing.

  1. It's backward compatible.
  2. All those cases are errors under noImplicitAny anyways.
  3. If you have a weird case where you want to do that and it's useful to you, you can explicitly declare the type: (): null => null or let x: null = null or function y(): null { return null }.

I would prefer an explicit syntax for non-nullable types. For objects, it would break a few too many things all at once.

Saying that string actually means string! | null may be another option. It should be noted that, if you read all the discussions above, this was at first proposed because there was hope that it would be more compatible with existing code (not changing the current meaning of string). But in fact the conclusion was that even then, huge amounts of code would be broken anyway.

Given that adopting a non-nullable type system is not going to be a trivial switch for existing codebases, I would choose the string? option over string! because it feels cleaner. Especially if you look at the TS compiler source code, where pretty much all internal types would be inaccurately named :(

It might be a little more feasible/non-breaking to make numbers that aren't optional arguments to be non-nullable by default via a compiler flag. This would also require a syntax to explicitly mark nullable types, but I'm already seeing that as a likely result of this discussion.

I think things are getting confusing at this point. For me this seems like a good argument to use string? everywhere. I wouldn't like to see mixing up number? and string!, this would get too confusing too quickly.

[1] Introducing a flag to make this the default wouldn't work well in the first place for practical reasons.

Yes it wouldn't work well for an existing code base without changes. But:

  1. It will work great for new code bases. We can build a better future with this.
  2. Existing code bases will reap _some_ benefits and additional checks, even if they never opt-in.
  3. Existing code bases can opt in on a per-file basis, which allows new code to be more strict and may allow progressive transition of older code, if desired.

@jods4

I think things are getting confusing at this point. For me this seems like a good argument to use string? everywhere. I wouldn't like to see mixing up number? and string!, this would get too confusing too quickly.

I didn't mean it in that sense. I meant there's less of a use case for nullable numbers than for nullable strings. I'm not that attached to that suggestion, anyways (I wouldn't care either way).

Existing code bases can opt-in in a per-file basis, which allows new code to be more strict and may allow progressive transition of older code, if desired.

With what means? I don't believe you can currently configure the compiler on a file-by-file basis. I'm open to be proven wrong here.

@impinball

With what means? I don't believe you can currently configure the compiler on a file-by-file basis. I'm open to be proven wrong here.

Maybe I am wrong. I've never used them but when looking at test cases I see lots of comments looking like // @module amd. I've always assumed that it was a way to specify options from inside a ts file. I've never tried to do it, though, so maybe I'm completely wrong! Look for example here:
https://github.com/Microsoft/TypeScript/blob/master/tests/cases/conformance/externalModules/amdImportAsPrimaryExpression.ts

If this is not a way to specify per-file options, then we might need something new. It is required to specify this option per-file, _at least_ for .d.ts files. A specially formatted comment at the top of the file might do; after all, we already have support for /// <amd-dependency /> and co.

EDIT: I am wrong. Spelunking the source code has shown me that those things are called markers and pre-parsed by the FourSlash runner. So it's only for tests, it's not inside the compiler. Something new would need to be figured out for that.

There are also practical limitations in some cases: per-file options are not very practical for things like ES6 features, as they would require completely re-parsing the file. And IIUC the type checking is done in the same pass as the AST creation in the first place.

If you look at the parser code, the /// <amd-dependency /> and /// <amd-module /> comments are read before anything else is parsed in a file. Adding something like /// <non-null-types /> would be feasible. OK that's terrible naming and I'm not for the proliferation of arbitrary "magic" options but that's just an idea.

@jods4 I think @impinball is saying that IDEs won't support it without large changes to how their syntax highlighting works.

Will there be any form of non-nullable type in 2.0?
After all, TypeScript brings very little to JavaScript if it can't guarantee a type. What do we gain after overcoming all the problems and rewrites introduced by TypeScript?
Just better IntelliSense? You still need to check the type manually, or in code, in every function exported to other modules.

Good news everyone, TypeScript 1.6 has enough expressive power to model and safely track missing values (which are otherwise encoded by null and undefined). Essentially, the following pattern gives you non-nullable types:

declare module Nothing { export const enum Brand {} }
interface Nothing { 'a brand': Nothing.Brand }
export type Nullable<a> = a | Nothing;
var nothing : Nothing = null;
export function isNothing<a>(value: Nullable<a>): value is Nothing {
    return value == null;
}
var something = Math.random() > 0.5 ? 'hey!' : nothing;
if (isNothing(something)) {
    // missing value
    // there is no way you can get anything out of it
    // there is also NO WAY to get a null reference exception out of it
    // because it doesn't have any methods or properties that could be examined
    // it is 100% explicit and typesafe to use
} else {
    // value is present, it is 100% GUARANTEED being NON-NULL
    // you just CANT get a null reference exception here either
    console.log(something.toLowerCase());
}

/** turns any unsafe values into safe ones */
export function sanitize<a>(unsafe: a) : Nullable<a> {
    return unsafe;
}

var safe = sanitize(toResultFromExternalCodeYouCannotTrust()); // <-- 100% safe to use

With that said the request should be closed because no problem exists anymore. Case closed, class dismissed.

@aleksey-bykov

With that said the request should be closed because no problem exist anymore.

You can't be serious, can you? We've told you multiple times that a pattern for a nullable algebraic type was not what this issue was about.

I am _not_ going to wrap every single var x: number in my code into a Nullable<number> or even better NonNullable<x>. That's just too much overhead. Yet I want to know that doing x *= 2 is 100% safe, anywhere this happens in my code.

Your code doesn't solve the problem of using 3rd party library. I am _not_ going to call sanitize on _every_ 3rd party library, or even built-in DOM api, that I call. Moreover I don't want to add isNothing(safe) _all over the place_. What I want is being able to call let squares = [1,2,3].map(x => x*x) and be 100% sure at compile time that squares.length is safe. With _any_ API that I use.

Similarly I want documentation for 3rd party JS libraries that a function accepts null as input or not. I want to know at compile time if $(element).css(null) is OK or an error.

I want to work in team environments, where I cannot ensure that everyone uses complex patterns such as yours consistently. Your Nullable type does absolutely nothing to prevent a dev from doing let x: number = null; x.toString() (or something less stupid, but to the same effect).

And so on and so forth. This ticket is _far_ from closed and the problem is still 100% there.

You can't be serious, can you?

I am quite serious.

I am not going to wrap every single...

Why not? With ! syntax or whatever else you guys are pushing, you would have to do it anyway.

or even better NonNullable<x>

It should be Nullable: what we are modelling are nullable types, which can either hold a non-null value or null, and state so explicitly. Contrast that with conventional types, which can ALWAYS hold a value or null yet are just called number, implying you should check for null before using them.

That's just too much overhead.

Where?

Your code doesn't solve the problem of using 3rd party library.

It does.

I am not going to call sanitize on every 3rd party library, or even built-in DOM api, that I call.

You don't have to. All you need to do is to fix the definition file of that 3rd party lib by replacing everything that can be null with Nullable<*>

Similarly I want documentation for 3rd party JS libraries that a function accepts null as input or not.

Same thing. Define that method as accepting Nullable<string> instead of a plain string.

Your Nullable type does absolutely nothing to prevent a dev from doing let x: number = null; x.toString()

It does not indeed. You need to ban the null keyword using the linter.

Come on, my solution is 95% working and 100% practical, and it is available TODAY without having to make difficult decisions or get everyone on the same page. The question is what you are after: a working solution that doesn't look the way you expected but nevertheless works, or getting it precisely the way you want, with blackjack and cherries on top.

In 1.4 and 1.5, the void type does not allow access to any members of Object, including .toString(), so type Nothing = void; should be sufficient instead of needing the module (unless this changed again in 1.6). http://bit.ly/1OC5h8d

@aleksey-bykov that's not really true. You still have to be careful to sanitize all values that are possibly null. You won't get a warning if you forget to do that. It's an improvement, for sure (fewer places to be careful at).

Also, to illustrate how hacks won't really get you very far, just try

if (isNothing(something)) {
  console.log(something.toString())
}

I am not going to wrap every single...

Why not? With ! syntax or whatever else you guys are pushing, you would have to do it anyway.

OK, I _could_ do that if everything else were solved. And if everything were solved, maybe TS could even consider some language sugar for it.

or even better NonNullable<x>

It should be Nullable: what we are modelling are nullable types, which can either hold a non-null value or null, and state so explicitly. Contrast that with conventional types, which can ALWAYS hold a value or null yet are just called number, implying you should check for null before using them.

It's not that I just want to get rid of null exceptions. I want to be able to express non-nullable types as well, with static safety.
Say you have a method that always return a value. Like toString() always returns a non null string.
What are your options there?
let x: string = a.toString();
This is not good because there is no static validation that x.length is safe. In this case it is, but as you said the built-in string might well be null so that is the _status quo_.
let x = sanitize(a.toString());
OK, now I cannot use it without a null check, so the code _is_ safe. But I do not want to add if (isNothing(x)) everywhere I use x in my code! It's both ugly and inefficient, as I may very well know that x is not null at compile time. Doing (<string>x).length is more efficient, but it's still ugly as hell to have to do that anywhere you want to use x.

What I want to do is:

let x = a.toString(); // documented, non-null type string (string! if you want to)
x.length; // statically OK

You cannot achieve this without proper language support, because all types in JS (and TS 1.6) are always nullable.

I would like to say that programming with non-optional (non-nullable) types, as much as you possibly can, is a very good practice. So what I'm describing here is an essential scenario, not an exception.

That's just too much overhead.

Where?

See my previous answer.

Your code doesn't solve the problem of using 3rd party library.

It does.

Only half the problem is solved. Assume that library definitions were updated as you proposed: every nullable input or returned value is declared as Nullable<X> instead of X.

I can still pass null to a function that doesn't accept null parameters. OK, this fact is now _documented_ (which we can already do with JSDoc comments or whatever), but I want it _enforced_ at compile time.
Example: declare function find<T>(list: T[], predicate: (t: T) => boolean). Both parameters should not be null (I haven't used Nullable), yet I can do find(null, null). Another possible error: the declaration says I should return a non-null boolean from the predicate, yet I can do find([], () => null).

Your Nullable type does absolutely nothing to prevent a dev from doing let x: number = null; x.toString()

It does not indeed. You need to ban the null keyword using the linter.

Doing so and providing a nothing global variable instead will not help with the cases I listed in the previous point, will it?

From my point of view this is not 95%. I feel like the syntax got a lot worse for many common things and I still don't have all the static safety that I want when I talk about non-nullable types.

You still have to be careful to sanitize all values that are possibly null.

Just the same sort of separation job you would have to do anyway with the hypothetical non-nullable types you guys are talking about here. You would have to review all your definition files and put ! where appropriate. How is that different?

You won't get a warning if you forget to do that.

Same thing. Same thing.

illustrate how hacks won't really get you very far, just try

Although you're right about the way I defined Nothing, which has toString from Object, if we take @Arnavion's idea of using void for Nothing, it all suddenly clicks.

Watch

(screenshot)

From my point of view this is not 95%. I feel like the syntax got a lot worse for many common things and I still don't have all the static safety that I want when I talk about non-nullable types.

man, you just convinced me, this pattern is not for you and never will be, please don't ever use it in your code, keep asking for the real non-nullable types, good luck, sorry for bothering you with such nonsense

@aleksey-bykov
I see what you are trying to do, although it seems to me you haven't ironed out all cases yet and you seem to be making up solutions along the way when someone points out a hole (like the substitution of null by void in the last example).

In the end _maybe_ you'll get it to work the way I want. You'll use a complex syntax for nullable types, and you'll have banned with a linter any source of null in the program, which may eventually make nullable types non-null. Along the way you'll have to convince everyone to update their libraries with your non-standard convention, otherwise it's just no use (mind you, this is true of _all_ solutions to the null problem, not just yours).

At the very end you may succeed in twisting the type system to avoid the built-in null type and have the checks we want attached to your Nothing type.

At this point, don't you feel like the language should just support it? The language could implement everything pretty much the same way you do (internally). But on top of that you would get nice syntax, ironed-out edge cases, no need for external linters, and surely better adoption by 3rd party definitions. Doesn't that make more sense?

At this point, don't you feel like the language should just support it?

Let's talk about what I think. I think it would be nice to wake up tomorrow in a world where TypeScript has non-nullable types. As a matter of fact, this is the thought I go to bed with every night. But it never happens the next day, and I get frustrated. Then I go to work and hit the same problem with nulls over and over, until recently, when I decided to look for a pattern that could make my life easier. It looks like I found it. Do I still wish we had non-nullables in TypeScript? Sure thing I do. Can I live without them and without frustration? Looks like I can.

Do I still wish we had non-nullables in TypeScript? Sure thing I do.

Then why do you want to close this issue? :smiley:

With that said the request should be closed because no problem exists anymore. Case closed, class dismissed.

I am glad that you found a solution for your needs, but like you I still hope that one day TS gets proper support. And I believe that this may still happen (not soon, mind you). The C# team is currently doing the same brainstorming, and maybe that will help move things forward. If C# 7 manages to get non-null types (which is not certain yet), there is no reason TS might not do the same.

@aleksey-bykov

So to simplify

var nothing: void = null;
function isNothing<a>(value: a | void): value is void {
    return value == null;
}
var something = Math.random() > 0.5 ? 'hey!' : nothing;

and now you can use it without any function wrappers, too - proper type definitions are sufficient.
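
For example, a small usage sketch of the definitions above (something is the variable from the snippet):

if (isNothing(something)) {
    // here something is void: member access is a compile-time error
} else {
    console.log(something.toLowerCase()); // narrowed to string, guaranteed non-null
}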

That's basically the workaround that originally made me kinda happy, until I was discouraged from relying on it.

You would have to review all your definition files and put ! where appropriate. How is that different?

The main difference when using standard, official language features instead of hacks is that the community (DefinitelyTyped) is there to help write, test and share the definition files instead of all that extra work being piled up on top of your projects (well bottom really :smile:).

I'm advocating for "--noImplicitNull". That would make all types default to non-nullable, immediately giving feedback about potential problems in any existing code. If there is a "!", it should behave like Swift's, making the type checker ignore the possibility of nulls (an unsafe cast to non-nullable) - although that's in conflict with the ES7+ "eventual send" operator on promises, so it might not be the best idea.

If that seems too radical in terms of backward-compatibility breakage, that's really my fault; I should find the time to try it in a fork to prove that it isn't as radical as it looks. In fact, adding "!" is more radical: it would require more changes to take advantage of it, as most values aren't really null. "--noImplicitNull" is more gradual: you will discover more errors as you fix type definitions that incorrectly claim non-null values, and without those fixes you get the same behavior except for null assignment, uninitialized values and forgotten checks for optional object fields.

Adding my voice to the discussion, I like how Kotlin has solved the issue, i.e.

  • own variables have not nullable types by default, a '?' changes that
  • external variables are nullable, a '!!' overrides
  • a null check is a type cast
  • external code can be analyzed by a tool that generates annotations


  • if a|void is banned one can switch to a|Nothing

The main difference when using standard, official language features instead of hacks is that the community (DefinitelyTyped) is there to help write, test and share the definition files

  • can't see why using 100% legit features is a hack
  • don't overestimate the skills of the community: many (if not most) definitions there are of low quality, and even those like, say, jquery often can't be used as written without a few alterations
  • again, a pattern won't ever be as good as a full-fledged feature, hope everyone gets it, no doubt
  • a pattern can solve your problems today, whereas a feature may or may not be near
  • if a situation can be efficiently (up to 95%) solved by employing A, B and C, why would anyone need D to do the same? craving some sugar? why wouldn't we focus on something that cannot be solved by a pattern?

anyway, there are problem solvers and there are perfectionists; as was shown, the problem is solvable

(If you don't want to rely on the void type, Nothing can also be class Nothing { private toString: any; /* other Object.prototype members */ } which will have the same effect. See #1108)

_Note: nothing here is about bikeshedding the syntax. Please don't take it as such._

I'll put it simply: here's the problem with the TS 1.6 solution proposed here: it can feasibly work, but it doesn't apply to the standard library or most definition files. It also doesn't apply to math operators. If you want to complain about redundant type/runtime boilerplate, try doing generic collections work in C++ - this pales in comparison, especially if you want lazy iteration or collections that aren't arrays (or even worse, try combining the two).

The problem with the solution currently proposed by @aleksey-bykov is that it's not enforced in the language core. For example, you can't do Object.defineProperty(null, "name", desc) - you'll get a TypeError from the null target object. Another thing: you can't assign a property on a null object, as in var o = null; o.foo = 'foo'; - that throws a TypeError as well. This proposed solution cannot account for those cases. This is why we need language support for it.

_Now, a little bit of mild bikeshedding..._

As for the syntax, I like the conciseness of the "?string" and "!string" syntax, but as far as the end result goes, I don't care as long as I'm not writing ThisIsANullableType<T, U, SomeRidiculousAndUnnecessaryExtraGeneric<V, W>>.


And could we at least get an idea what the bike shed structure would look like before we obsess over what color to paint it? Most of what I've seen so far after the first ~10 comments is "Hey, we want to build a bike shed! What color should we paint it?" Nothing about making sure the design is structurally sound. (See #3192 and the first half of #1206 for a couple of other examples of this - most of the noise died down when I finally made a serious proposal with a logically created syntax and fully specified semantics.)

Keep in mind: this will very likely result in a major refactor and partial rewrite of the standard library type definitions. It will also result in a majority of the type definitions on DefinitelyTyped needing to be rewritten for the first version of TS that supports this, so this will definitely be breaking. (A solution could be to always emit by default, even when null-related errors occur, but to provide a flag a la --noImplicitAny to change that behavior.)

Admittedly I am a bit out of my depth here. Yet, I searched scholar.google for "nullable javascript" and "nullability javascript":

Understanding TypeScript
has a nice table of types in TypeScript (modulo generics).

Dependent Types for JavaScript
seems to be important (the source code has vanished).

Trust, but Verify: Two-Phase Typing for Dynamic
Languages

Refinement Types for Scripting Languages
offers a solution based on TypeScript.
("The shorthand t? stands in for t + null."; sounds like #186)

@afrische Much of this is already being used practically in JS type checkers. Flow uses most of the idioms in "Trust, but Verify", for example. There's also Infernu, a WIP JS type checker that relies largely on Hindley-Milner type inference [1] to infer its types. But I digress...

[1] Haskell, OCaml, etc. use a modified version as well for their type systems.

I am currently toying with a TS fork that assumes types are not nullable by default and that makes the (existing) Null type referenceable with null, e.g. string | null. I also added syntax for string? which is merely a shortcut for the same union type.

When you think deeply about it, it's the same thing as @aleksey-bykov's approach, but with null as the Nothing type, built in.

And that is not even complicated to do because the type system is already quite good at handling all of this!

Where things get hairy is that we want to have a smooth transition path. We need some backward compatibility, _at least_ with existing definition files (the compatibility with the project itself could be an opt-in flag, although it would be better if the meaning of a piece of code -- say on the internet -- didn't depend on global compiler flags).

@afrische's idea of using string? in your own project but string! in the .d.ts could be a new approach, although I don't quite like the arbitrary duality that it creates. Why would string be nullable in some files and non-nullable in others? Seems weird.

I'd prefer not having string nullable in some files and not in others.
I like @impinball's idea of having an opt-out flag in the compiler and introducing it as a breaking change (gasp: big change from my earlier arguments). Assuming that the new syntax won't cause errors when using the opt-out flag, of course.

This thread is over a year old, and it's hard to figure out what all of the relevant designs and concerns are with nearly 300 comments and only a few from the TS team themselves.

I am planning to migrate a large codebase to Flow, Closure Compiler or TypeScript but the lack of null safety in TypeScript is a real deal breaker.

Without having read the whole thread I can't quite figure out what's wrong with adding both ! and ? nullability specifiers while maintaining the existing behaviour for types lacking them, so here that is as a proposal:

Proposal

declare var foo:string // nullability unspecified
declare var foo:?string // nullable
declare var foo:!string // non-nullable

With ?string a superset of !string, and string both a superset and subset of both, i.e. the relationship between string and both ?string and !string is the same as the relationship between any and all other types, i.e. a bare type without ! or ? is like an any with respect to nullability.

The following rules apply:

| Type | Contains | Provides | Can assign null? | Can assume not null? |
| --- | --- | --- | --- | --- |
| T | T, !T, ?T | T, ?T, !T | yes | yes |
| ?T | T, !T, ?T | T, ?T | yes | no (type guard required) |
| !T | T, !T | T, ?T, !T | no | yes |

This provides null safety without breaking existing code, and enables existing codebases to introduce null safety incrementally, just like any enables existing codebases to introduce type safety incrementally.

Example

Here is some code with a null reference error:

function test(foo:Foo) { foo.method(); }
test(null);

This code still passes. null is still assignable to Foo and Foo can still be assumed to be non-null.

function test(foo:!Foo) { foo.method(); }
test(null);

Now the test(null) is in error, since null is not assignable to !Foo.

function test(foo:?Foo) { foo.method(); }
test(null);

Now the foo.method() is in error, since a method call on ?Foo is disallowed (i.e. the code must check for null).
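To round out the examples, here is how the "type guard required" row of the table would play out (same hypothetical syntax):

function test(foo: ?Foo) {
  if (foo !== null) {
    foo.method(); // ok: the guard narrows ?Foo to !Foo
  }
}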

Thank you! And I really like this idea. Although, for compatibility's sake, could we make ?T and T just aliases, like Array<T> and T[]? It would simplify things, and the undecorated version is still technically nullable.


@impinball

Thank you! And I really like this idea. Although, for compatibility's sake, could we make ?T and T just aliases, like Array<T> and T[]? It would simplify things, and the undecorated version is still technically nullable.

The _whole idea_ is that T, ?T and !T are three distinct things, and that _is_ for compatibility's sake (compatibility with existing TS code). I don't know how to explain it any better, sorry. I mean, I made a little table and everything.

Okay. Good point. I misinterpreted the table on that part, and I overlooked
the case of existing definition files switching, causing problems with
other applications and libraries.


sorry for being a smart ass, but this doesn't make sense

?string a superset of !string, and string both a superset and subset of both

from set theory we know that a set A is both a subset and a superset of a set B if and only if A = B

if so, when you say "of both" it can only mean everything-of-?string = everything-of-string and everything-of-!string = everything-of-string

so ultimately everything-of-?string = everything-of-!string

from where everyone gets off, the bus isn't going anywhere, the bus is out of service

@aleksey-bykov Probably a case of poor phrasing. I think he means that string is more like ?string | !string, taking the most permissive constraints.

This whole thing is similar in scope to Java's increasingly common @Nullable and @NonNull/@NotNull compile-time type annotations, and how they work with unannotated types.

And I would definitely be in favor of a new flag for types being implicitly taken as non-nullable, particularly primitives.

A "non-nullable by default" flag would be nice, but it would also break a large number of DefinitelyTyped definitions. This includes Angular's and Node's definitions there, and it would require a lot of tedious work to fix. It may also require a backport so the new types still parse not as syntax errors, but the nullability is not checked. Such a backport would be the only practical way to mitigate breakage with such a flag, especially as definitions are updated for nullable types. (People still use TypeScript 1.4 in development, especially in larger projects.)

@impinball

People still use TypeScript 1.4 in development, especially in larger projects.

I think the compatibility story for this feature is already hard enough without trying to make new definitions compatible with old compilers (FWIW non-null types are almost trivial to add to the current TS compiler).

If people stay on old TS then they should use old definitions. (I know that's not practical.)
I mean, things are going to break anyway. Soon TS 1.6 will add intersection types, and definitions that use them won't be compatible either.

BTW I'm surprised people stay on 1.4, there's so much to love in 1.5.

@aleksey-bykov

sorry for being a smart ass, but this doesn't make sense

I assume you're calling yourself a "smart ass" because you know perfectly well from the rest of the post what "superset and subset of both" is intended to mean. Of course it's nonsensical if applied to actual sets.

any can be assigned any type (it behaves as a superset of all types) and treated as any type (it behaves as a subset of all types). any means "opt out of type checking for this value".

string can be assigned from both !string or ?string (it behaves as a superset of them) and can be treated as both !string and ?string (it behaves as a subset of them). string means "opt out of nullability checking for this value", i.e. current TS behaviour.

@impinball

And I would definitely be in favor of a new flag for types being implicitly taken as non-nullable, particularly primitives.

@RyanCavanaugh explicitly said:

Flags that change the semantics of a language are a dangerous thing. [...] It's important that someone looking at a piece of code can "follow along" with the type system and understand the inferences that are being made. If we start having a bunch of flags that change the rules of the language, this becomes impossible.

The only safe sort of thing to do is to keep the semantics of assignability the same and change what's an error vs what isn't depending on a flag, much like how noImplicitAny works today.

That's why my proposal maintains the existing behaviour for types lacking a nullability specifier (! or ?). The _only_ safe flag to add would be one which disallows types lacking a nullability specifier, i.e. it only takes passing code and causes it to error, it doesn't change its meaning. I assume noImplicitAny was allowed for the same reason.

@jesseschalken

I meant that in the context of implicitly typed variables not initialized
to null or undefined in cases like these:

var a = new Type(); // type: !Type
var b = 2; // type: !number
var c = 'string'; // type: !string
// etc...

Sorry for the confusion.


@impinball what you are trying to say by altering the original meaning of the words superset and subset can still be articulated in terms of sets easily: the set of values of string is a union of the values of !string and ?string, meaning anything from !string and/or ?string belongs to string

@impinball Again, such a flag would change the meaning of existing code, which (from reading @RyanCavanaugh's comments) is not allowed. export var b = 5; will now export a !number where previously it exported a number.

Yes, in a sense. To be a little more technical, it accepts the union, but
it provides the intersection. Basically, either type can count as a
string, and a string can be passed for either type.


Obviously, it's not meant to be on by default. And most definition files would be unaffected. Technically, any change that isn't purely additive (even adding a new argument to a function isn't, in JavaScript) has the capacity to break applications. It's similar in scope to noImplicitAny in that it forces a little more explicit typing. And I don't believe it could break much more than that, especially since the only way this could affect other files is through exports from actual TypeScript source files. (The other flag broke numerous definition files, and disabled a frequent way of duck testing.)


Obviously, it's not meant to be on by default.

It's still changing the meaning of the code. As soon as you do that, you can't read some TypeScript code and say "I know what this means", nor can you write a library which someone else will use, because the meaning of the code depends on what flags are used to compile it, and that is unacceptable. You've literally split the language in two.

That's why my proposal maintains existing behaviour for types lacking a nullability specifier (? or !).

@jesseschalken
If you want (and manage) to preserve with 100% fidelity the meaning of existing code, then you should give up on the idea of null safety in TypeScript. No solution is going to be usable in practice.

Having to put ? and ! on every type annotation is already verbose enough that I wouldn't want to do it, but you'd also have to give up on all type inference. let x = 3 infers number today. If you do not accept a change here, it means that you have to explicitly type everything to reap the benefits of null safety. Not something that I'm willing to do either.

When the benefits outweigh the drawbacks, some concessions can be made. As has been pointed out by @impinball, the TS team did exactly that with noImplicitAny. It's a flag that creates new errors in your code when you turn it on. As such, copy-pasting code from the internet, or even just using TS libraries, can break if the code you take in was not written under the noImplicitAny assumption (and it has happened to me).

I think the null safety can be introduced in a similar way. The meaning of code is the same and it runs with exactly identical semantics. But under a new flag (say noImplicitNull or whatever) you'd get additional checks and warnings/errors from code that is not written with the noImplicitNull assumption.

I think the null safety can be introduced in a similar way. The meaning of code is the same
and it runs with exactly identical semantics. But under a new flag (say noImplicitNull or whatever)
you'd get additional checks and warnings/errors from code that is not written with
the noImplicitNull assumption.

I like that approach and it seems a logical way to evolve the language. I hope that over time it would become the de-facto standard in the same way that typings are usually written with noImplicitAny in mind.

However, I think the important thing as far as gradual adoption is concerned is to make sure that existing code can be migrated a module at a time and that new code written with explicit nulls in mind can easily work with existing code which uses implicit nulls.

So how about this:

  • With -noImplicitNull, T becomes an alias for !T. Otherwise the nullability of T is unknown.
  • The flag should be overridable on a per-module basis by adding an annotation at the top, e.g. @ts:implicit_null. This is similar to the way Flow enables type checking on a per-module basis.
  • Converting an existing codebase is done by first adding the -noImplicitNull compiler option, then annotating all existing modules with the '@ts:implicit_null' flag. This second step could be trivially automated.
  • When importing a module, there needs to be a policy about how to convert the types if the imported and importing modules have different implicitness settings.

There are different options for that last point if a module with explicit nulls imports one with implicit nulls.

  • An extreme approach would be to treat the nullability of T types as unknown and require the importer to explicitly cast the type to ?T or !T. That would require a lot of annotations in the importing module, but it would be safer.
  • Another approach would be to treat all imported T types as ?T. This would also require a lot of annotations in the caller.
  • Lastly, all imported T types could be treated as !T. This would of course be wrong in some cases but it might be the most pragmatic option. Similar to the way that a value of type any can be assigned to a variable of type T.

Thoughts?
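To make the idea concrete, a sketch of the pragma plus the last (pragmatic) import policy, using the names proposed above (all hypothetical, nothing the compiler supports today):

// legacy.ts -- compiled under -noImplicitNull but opted out via the pragma:
// @ts:implicit_null
export let title: string;  // nullability of string stays implicit in this file

// fresh.ts -- a strict module importing the legacy one:
import { title } from "./legacy";
let t: string = title;     // under the third option, imported T is treated as !T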

@jods4 The noImplicitAny flag _does not_ change the meaning of existing code, it only requires the code to be explicit about something that would otherwise be implicit.

| Code | Flag | Meaning |
| --- | --- | --- |
| interface Foo { blah; } | | interface Foo { blah:any; } |
| interface Foo { blah; } | noImplicitAny | error, explicit type required |
| var foo = 'blah' | | var foo:string = 'blah' |
| var foo = 'blah' | noImplicitNull | var foo:!string = 'blah' |

With noImplicitNull, before you had a variable which null could be written to. Now you have a variable which null _cannot_ be written to. That's an entirely different beast to noImplicitAny.

@RyanCavanaugh has already ruled out flags which change the semantics of existing code. If you're going to flatly ignore the express requirements of the TS team then this ticket is going to hang around for another year.

@jesseschalken Sorry but I fail to see the difference.
Before noImplicitAny you may have this in your code:
let double = x => x*2;
It compiles and works fine. But once you turn on noImplicitAny, then the compiler throws an error at you saying that x is implicitly any. You have to modify your code to make it work with the new flag:
let double = (x: any) => x*2 or better yet let double = (x: number) => x*2.
Note that although the compiler raised an error, it would still emit perfectly working JS code (unless you turn off emit on errors).

The situation with nulls is pretty much the same in my opinion. Let's assume for discussion that with the new flag, T is non-nullable and T? or T | null denotes the union of type T and null.
Before you might have had:
let foo: string; foo = null; or even just let foo = "X"; foo = null which would be inferred to string just the same.
It compiles and works fine. Now turn on the new noImplicitNull flag. Suddenly TS throws an error indicating that you can't assign null to something which was not explicitely declared as such. But except for the typing error your code still emits _the same_, correct JS code.
With the flag you need to state your intention explicitely and modify the code:
let foo: string?; foo = null;

So what is the difference, really? Code is always emitting fine and its runtime behavior has not changed at all. In both cases you get errors from the typing system and you have to modify your code to be more explicit in your type declarations to get rid of them.

Also, in both cases, it is possible to take code written under the strict flag and compile it with the flag turned off and it still works the same and without error.

@robertknight Very close to my current thinking.

For modules / definitions that have not opted in to strict non-null types, T should basically mean: turn off all kinds of null errors on this type. Trying to coerce it to T? can still create compatibility problems.

The problem is that today some T are actually non-nullable and some are nullable. Consider:

// In a strict module, function len does not accept nulls
function len(x: string): number { return x.length; }
// In a legacy module, some calls to len
let abc: string = "abc";
len(abc);

If you alias string to string? in the legacy module, then the call becomes an error because you pass a possibly null variable into a non-nullable parameter.

@jods4 Read my comment again. Look at the table. I don't know how to express it any more clearly.

Your example was explicitly crafted to arrive at your conclusion by putting the definition of foo and the assignment to foo next to each other. With noImplicitAny, the only errors that result are specifically from the code that needs to change (because it hasn't changed its meaning, it has only required it to be expressed more explicitly). With noImplicitNull, the code that caused the error was the _assignment_ to foo but the code that needed to change to fix it (to have the old meaning) was the _definition_ of foo. This is critically important, because the flag has _changed the meaning of the definition of foo_. The assignment and the definition can obviously be on different sides of a library boundary, for which the noImplicitNull flag has changed the _meaning_ of that library's public interface!

the flag changed the meaning of the definition of foo.

Yes, that is true. It changed from "I don't have the slightest idea whether the variable can hold null or not -- and I just don't care" to "This variable is null-free". There's a 50/50 chance that it was right, and if it wasn't, you must make your intent precise in the declaration. In the end, the result is just the same as with noImplicitAny: you must make your intent more explicit in the declaration.

The assignment and the definition can obviously be on different sides of a library boundary

Indeed, typically the declaration in the library and the use in my code. Now:

  1. If the library has opted in to strict nulls then it must declare its types correctly. If the library says it has strict null types and x is non nullable, then me trying to assign a null _is indeed an error that ought to be reported_.
  2. If the library has not opted in to strict nulls then the compiler should not raise any error for its usage.

This flag (just like noImplicitAny) can not be turned on without adjusting your code.

I see your point, but I would say that we do not _change_ the meaning of code; rather we _express meaning that is not captured_ by the type system today.

Not only is this good because it will catch errors in today's code, but I'd say that without taking such a step there will never be usable non-null types in TS.

Good news for non-nullable types! It looks like the TS team are alright with introducing breaking changes in TS updates!
See this:
http://blogs.msdn.com/b/typescript/archive/2015/09/02/announcing-typescript-1-6-beta-react-jsx-better-error-checking-and-more.aspx
They introduce a breaking change (mutually exclusive optional syntax) with a new file type, and they introduce a breaking change _without_ the new file type (affects everyone).
That is a precedent we can argue non-nullable types with (e.g. a .sts or "strict TypeScript" extension and a corresponding .sdts).

Now we just need to figure out whether we want the compiler to attempt to check for undefined types or just null types (and what syntax) and we have a solid proposal.

@jbondc Very interesting read. Happy to see that my intuition about migration to NNBD (non-nullable by default) being easier than migration to optional non-nullability has been confirmed by studies (an order of magnitude fewer changes to migrate, and in the case of Dart, 1-2 annotations per 1000 lines of code needed nullity changes, not more than 10 even in null-heavy code).

I'm not sure if the complexity of the document really reflects the complexity of non-nullable types. For example, in the generics section they discuss 3 kinds of formal type parameters, then show that you don't actually need those. In TS, null would simply be the type of the completely empty record (no properties, no methods) while {} would be the root non-null type, and then non-nullable generics are simply G<T extends {}> - no need to discuss multiple kinds of formal type parameters at all.

Additionally it seems that they propose a lot of non-essential sugar, like var !x

The survey of existing languages that have dealt with the same problem is the real gem though.

Reading the document I realized that Optional / Maybe types are more powerful than nullable types, especially in a generic context - mostly because of the ability to encode Just(Nothing). For example, if we have a generic Map interface that contains values of type T and supports get which may or may not return a value depending on the presence of a key:

interface Map<T> {
  get(s:string):Maybe<T>
}

there is nothing preventing T from being of type Maybe<U>; the code will work perfectly well and return Just(Nothing) if a key is present but contains a Nothing value, and will simply return Nothing if the key is not present at all.

In contrast, if we use nullable types

interface Map<T> {
  get(s:string):T?
}

then it's impossible to distinguish between a missing key and a null value when T is nullable.
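To make the Just(Nothing) point concrete, a sketch with a hand-rolled discriminated union standing in for Maybe (my own encoding, not a library type):

type Maybe<T> = { kind: "just"; value: T } | { kind: "nothing" };

// mirrors the Map interface above, renamed to avoid clashing with ES6 Map
interface MaybeMap<T> {
  get(s: string): Maybe<T>;
}

declare const m: MaybeMap<Maybe<number>>;

const hit = m.get("k"); // Maybe<Maybe<number>>
if (hit.kind === "just") {
  // The key exists; hit.value may itself be { kind: "nothing" }, i.e. a
  // present-but-empty entry that a nullable T? could not tell from a miss.
}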

Either way, the ability to differentiate nullable from non-nullable values and model the available methods and properties accordingly is a prerequisite for any kind of type safety.

@jbondc This is a very interesting find. They obviously did a lot of work and study on this.

I find it comforting that studies show that 80% of declarations are actually meant to be non-null, or that there are only 20 nullity annotations per KLOC (p. 21). As noted in the document this is a strong argument for non-null by default, which was also my feeling.

Another argument in favor of non-null is that it creates a cleaner type system: null is its own type, T is non-null and T? is a synonym for T | null. Because TS already has union type all is nice, clean and works well.

Seeing the list of recent languages that tackled that problem, I really think that a new modern programming language should handle this long-standing issue and reduce null bugs in code bases. This is a far too common problem for something that ought to be modelled in the type system. I still hope TS will get it some day.

I found the idea of the operator T! intriguing and possibly useful. I was thinking of a system where T is a non-null type and T? is T | null. But it bothered me that you couldn't really create a generic API that guarantees a non-null result even in the face of a null input. I don't have good use-cases, but in theory I couldn't model this faithfully: function defined(x) { return x || false; }.
Using the non-null reversal operator, one could write: function defined<T>(x: T): T! | boolean. Meaning that if defined returns a T, it is guaranteed to be non-null, even if the generic T constraint was nullable, say string?. And I don't think it's hard to model in TS: given T!, if T is a union type that includes the null type, return the type resulting from removing null from the union.
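(Aside: in later versions of TypeScript this exact operation became expressible as a distributive conditional type; the standard library's NonNullable<T> does it for null | undefined. A sketch of the null-only variant:)

type StripNull<T> = T extends null ? never : T;

// StripNull<string | null> is string, so the defined() example becomes:
declare function defined<T>(x: T): StripNull<T> | boolean;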

@spion

Reading the document I realized that Optional / Maybe types are more powerful than nullable types

You can nest Maybe structures, you cannot nest null, indeed.

This is an interesting discussion in the context of defining new APIs, but when mapping existing APIs there's little choice. Making the language map nulls to Maybe will not take advantage of that benefit unless the function is rewritten entirely.

Maybe encodes two distinct pieces of information: whether there is a value and what the value is. Taking your Map example and looking at C# this is obvious, from Dictionary<K,V>:
bool TryGetValue(K key, out V value).
Notice that if C# had tuples (maybe C# 7), this is basically the same as:
(bool hasKey, V value) TryGetValue(K key)
Which is basically a Maybe and allows storing null.

Note that JS has its own way of dealing with this issue, and it creates a whole lot of new interesting problems: undefined. A typical JS Map returns undefined if the key is not found, and otherwise the value, which may itself be null.

Related proposal for C# 7 - https://github.com/dotnet/roslyn/issues/5032

You guys do realise that the problem isn't solved unless you model undefined in the same manner?
Otherwise all your null problems will just be replaced with undefined problems (which imo are more prevalent anyway).

@Griffork

all your null problems will just be replaced with undefined problems

No, why would they?
My null problems will go away and my undefined problems will remain.

True, undefined is still an issue. But that depends a lot on your coding style. I code with almost no undefined except where the browser forces them on me, which means 90% of my code would be safer with null checks.

I code with almost no undefined except where the browser forces them on me

I would have thought that JavaScript forces undefined on one at every turn.

  • Uninitialised variables. let x; alert(x);
  • Omitted function arguments. let foo = (a?) => alert(a); foo();
  • Accessing non-existent array elements. let x = []; alert(x[0]);
  • Accessing non-existent object properties. let x = {}; alert(x['foo']);

Null, on the other hand, occurs in fewer and more predictable situations:

  • DOM access. alert(document.getElementById('nonExistent'));
  • Third-party web service responses (since JSON.stringify strips undefined) . { name: "Joe", address: null }
  • Regex.exec

For this reason, we prohibit the use of null, convert all null received over the wire to undefined, and always use strict equality checking for undefined. This has worked well for us in practice.
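For illustration, a minimal sketch of the over-the-wire conversion described (assuming plain JSON-shaped data with no cycles; my sketch, not their actual code):

function nullsToUndefined(value: any): any {
  if (value === null) return undefined;
  if (Array.isArray(value)) return value.map(nullsToUndefined);
  if (typeof value === "object") {
    const out: any = {};
    for (const key of Object.keys(value)) {
      out[key] = nullsToUndefined(value[key]);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}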

Consequently I do agree that the undefined problem is the more prevalent one.

@NoelAbrahams Coding style, I tell you :)

Uninitialised variables

I always initialize variables and I have noImplicitAny turned on, so let x would be an error anyway. The closest I would get in my project is let x: any = null, although that's code I wouldn't write often.

Optional function parameters

I use default parameter values for optional parameters; it seems to me that makes more sense (your code _will_ read and use the parameter somehow, doesn't it?). So for me: function f(name: string = null, options: any = {}).
Accessing the raw undefined parameter value would be an _exceptional_ case for me.

Accessing non-existent array elements

This is something that I strive not to do in my code. I check my arrays' lengths so as not to go out of bounds, and I don't use sparse arrays (or else I fill empty slots with default values such as null, 0, ...).
Again, you may come up with a special case where I would do that, but that would be an _exception_, not the rule.

Accessing non-existent object properties.

Pretty much the same thing as for arrays. My objects are typed, if a value is not available I set it to null, not undefined. Again you may find edge cases (like doing a dictionary probe) but they are _edge cases_ and not representative of my coding style.

In all _exceptional_ cases where I get an undefined back, I immediately take action on the undefined result and do not propagate it further or "work" with it. Typical real-world example in a fictional TS compiler with null checks:

let cats: Cat[];
// Note that find returns undefined if there's no cat named Kitty
let myFavoriteCat = cats.find(c => c.name === 'Kitty'); 
if (myFavoriteCat === undefined) {
  // Immediately do something to compensate here:
  // return false; or 
  // myFavoriteCat = new Cat('Kitty'); or
  // whatever makes sense.
}
// Continue with the assurance that myFavoriteCat is defined (it was an array of non-nullable cats, after all).

For this reason, we prohibit the use of null, convert all null received over the wire to undefined, and always use strict equality checking for undefined. This has worked well for us in practice.

From this I understand that you use a very different coding style than I do. If you basically use undefined everywhere then yes, you will benefit from statically checked null types a lot less than I would.

Yes, the thing is not 100% watertight because of undefined and I don't believe one can create a reasonably usable language that is 100% correct in this respect. JS introduces undefined in too many places.

But as I hope you can tell from my answers above, with appropriate coding style there is _a lot_ to benefit from null checks. At least my opinion is that in my code base it would help find and prevent many stupid bugs and be a productivity enhancer in my team environment.

@jods4, it was interesting to read your approach.

I think the objection that I have to that approach is there appear to be a lot of rules that need to be adhered to - it's rather like communism vs. the free market :smile:

The TS team internally have a rule similar to ours for their own style.

@NoelAbrahams "Use undefined" is as much a rule as "Use null".

In any case, consistency is key and I wouldn't like a project where I am not sure if things are supposed to be null or undefined (or an empty string or zero). Especially since TS currently does not help with this issue...

I know TS has a rule to favor undefined over null, I am curious if this is an arbitrary "for the sake of consistency" choice or if there are more arguments behind the choice.

Why I like to use null rather than undefined:

  1. It works in a familiar way for our devs, many come from static OO languages such as C#.
  2. Uninitialized variables are usually regarded as a code smell in many languages, not sure why it should be different in JS. Make your intention clear.
  3. Although JS is a dynamic language, performance is better with static types that do not change. It is more efficient to set a property to null than delete it.
  4. It supports the clean difference between null which is defined but signifies the absence of value, and undefined, which is... undefined. A place where the difference between the two is obvious: optional parameters. Not passing a parameter results in undefined. How do you _pass_ the empty value if you use undefined for that in your code base? Using null there's no problem here.
  5. It lays out a clean path to null checking, as discussed in this thread, which is not really practical with undefined. Although maybe I'm day-dreaming on this one.
  6. You have to make a choice for consistency, IMHO null is as good as undefined.

I think the reason for preferring undefined to null is default arguments and consistency with obj.nonexistentProp returning undefined.

Other than that, I don't get the bikeshedding over what counts as null
enough to require the variable to be nullable.


@impinball
Can we stop using "bikeshedding" in every GitHub discussion? We can safely say that using undefined or null is a team preference; but the issue of whether to include undefined in the null checks (or not), and how that would work, is not trivial. So I don't get how that's bikeshedding in the first place.

I have built a fork of TS 1.5 with non-nullable types and it was surprisingly easy. But I think that there are two difficult issues that need a consensus to have non-nullable types in the official TS compiler, both have been discussed at length above without a clear conclusion:

  1. What do we do with undefined? (my opinion: it's still everywhere and unchecked)
  2. How do we handle compatibility with existing code, in particular definitions? (my opinion: opt-in flag, at least per definition file. Turning the flag on is "breaking" because you may have to add null annotations.)

My personal opinion is that null and undefined should be treated
equivalently for purposes of nullability. Both are used for that use case,
representing the absence of a value. One is that the value never existed,
and the other is that the value once existed and no longer does. Both
should count for nullability. Most JS functions return undefined, but many
DOM and library functions return null. Both serve the same use case. Thus,
they should be treated equivalently.
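One practical consequence of treating them equivalently: a single loose-equality check covers both, since == null matches exactly null and undefined and nothing else (standard JS semantics, not part of any proposal):

function label(x: string | null | undefined): string {
  if (x == null) return "absent"; // true for both null and undefined
  return x; // a null-aware checker can narrow x to string here
}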

The bikeshedding reference was about the code style arguments over which
should be used to represent the absence of a value. Some are arguing for
just null, some are arguing for just undefined, and some are arguing for a
mixture. This proposal shouldn't limit itself to just one of those.


@jods4 I take it from your point 2 that your fork also infers the non-nullable type by default instead of requiring an explicit non-nullable type annotation?

Can you link it please? I would like to try it out.

@impinball
I would love to see (some) safety against undefined as well, but it is quite pervasive.

In particular, can we define an array of non-nullable types?
Given that an out-of-bounds (or sparse) array access returns undefined, can we conveniently define and use arrays?
I think that requiring all array element types to be nullable is too much of a burden in practice.

Should we differentiate null and undefined types? That's not difficult: T | null, T | undefined, T | null | undefined and may provide an easy answer to the question above. But then what about shorthand syntax: what does T? stand for? Both null and undefined? Do we need two different shorthands?

@Arnavion
Null and Undefined types already exist in TS.
My take was to:

  1. Make all types non-nullable (including inferred types);
  2. Give the null type a name (null) that you can use in type declarations;
  3. Remove the widening from null to any;
  4. Introduce syntax shorthand T? which is the same as T | null;
  5. Remove implicit conversions from null to any other type.

Without access to my sources I think that's the gist of it. Existing Null type and the wonderful TS type system (especially union types and type guards) do the rest.
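Under those rules, everyday code would check out like this (hypothetical syntax; string? is the shorthand from point 4):

let a: string = "x";    // non-nullable by points 1 and 5
a = null;               // error: null is no longer assignable to string
let b: string? = null;  // ok: string? means string | null
if (b !== null) {
  b.length;             // ok: the type guard narrows string? to string
}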

I have not committed my work on github yet, so I can't share a link for now. I wanted to code the compatibility switch first but I have been very busy with other things :(
The compatibility switch is a lot more involved <_<. But it's important because right now the TS compiler compiles itself with a lot of errors and lots of existing tests fail.
But it seems to actually work nicely on brand new code.

Let me summarise what I've seen from some commenters so far in the hope that I can illustrate how this conversation is going around in circles:
Problem: Certain 'special case' values are finding their way into code that's not designed to deal with them because they're treated differently to every other type in the language (i.e. null and undefined).
Me: why don't you just lay out programming standards so those problems don't happen?
Other: because it would be nice if the intention could be reflected in the typing, because then it won't be documented differently by every team working in TypeScript, and fewer false assumptions about 3rd party libraries will happen.
Everyone: How are we going to deal with this problem?
Other: let's just make nulls more strict and use standards to deal with undefined!

I don't see how this can possibly be considered a solution when undefined is far more a problem than null is!

Disclaimer: this does not reflect the attitudes of everyone here, it's just that it's come up enough that I wanted to highlight it.
The only way this proposal will be accepted is if it's a solution to a problem, not a little bit of a solution to the smaller part of a problem.


@jods4 I would say it depends on the construct. In some cases you can guarantee non-nullability, such as the following:

declare const list: T![];

for (const entry of list) {
  // `entry` is clearly non-null here.
}

list.forEach(entry => {
  // `entry` is clearly non-null here.
})

list.map(entry => {
  // `entry` is clearly non-null here.
})

In this case, the compiler would have to have a ton of logic to ensure that the array access is checked to be in bounds:

declare const list: T![]

for (let i = 0; i < list.length; i++) {
  // This could potentially fail if the compiler doesn't correctly do the static bounds check.
  const entry: T! = list[i];
}

And in this case, there is no way the compiler could verify that the access to get entry is in bounds without actually evaluating parts of the code:

declare const list: T![]

const end = round(max, list.length);

for (let i = 0; i < end; i++) {
  const entry: T! = list[i];
}

There are some easy, obvious ones, but there are some harder ones.

@impinball Indeed, modern APIs such as map, forEach or for..of are OK because they skip elements that were never initialized or were deleted. (They do include elements that have been set to undefined, but our hypothetical null-safe TS would forbid that.)
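A quick runnable illustration of that difference (plain JS semantics, nothing hypothetical):

const a: number[] = [];
a[2] = 1;                        // a[0] and a[1] are holes
a.forEach(x => console.log(x));  // logs only 1: forEach skips holes
console.log(a[0]);               // undefined: plain indexing does not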

But classical array access is an important scenario and I would like to see a good solution for that. Doing complex analysis as you suggested is clearly not possible except in trivial cases (yet important cases since they are common). But note that even if you could prove that i < array.length it doesn't prove that the element is initialized.

Consider the following example, what do you think TS should do?

let array: T![] = [];  // an empty array of non-null, non-undefined T
// blah blah
array[4] = new T();  // Fine for array[4], but it means array[0..3] are undefined, is that OK?
// blah blah
let i = 2;
// Note that we could have an array bounds guard
if (i < array.length) {
  let t = array[i];  // Inferred type would be T!, but this is actually undefined :(
}

There's also the problem with Object.defineProperty.

let array = new Array(5)
Object.defineProperty(array, "length", {
  get() { return 10 },
})


All you need to do is to:

  1. Make the Null and Undefined types referenceable.
  2. Prevent null and undefined values from being assignable to anything.
  3. Use a union type with Null and/or Undefined where nullability is implied.

It indeed is a bold breaking change and looks more suitable for a language
extension.

@impinball
I feel OK regarding your example. Using defineProperty in this way is stepping outside of the TS safety box and into the dynamic JS realm, don't you think? I don't think I ever called defineProperty directly in TS code.

@aleksey-bykov

looks more suitable for a language extension.

Another issue with this proposal is that unless it gets widely accepted it has less value.
Eventually we need updated TS definitions, and I don't think that will happen if it's a seldom-used, incompatible extension or fork of TS.

I do know that typed arrays do have a guarantee, because IIRC they throw a ReferenceError on out-of-bounds loads and stores. Regular arrays and arguments objects return undefined when the index is out of bounds. In my opinion, that's a JS language flaw, but fixing it would most definitely break the Web.

The only way to "fix" this is in ES6, via a constructor returning a proxy,
and its instance prototype and self prototype set to the original Array
constructor. Something like this:

Array = (function (A) {
  "use strict";
  // Returns a safe property key: canonical numeric indices are
  // bounds-checked; everything else ("length", methods, symbols)
  // passes through untouched.
  function check(target, prop) {
    if (typeof prop !== "string") return prop; // symbols pass through
    const i = +prop;
    if (String(i) !== prop) return prop; // not a numeric index
    if (i < 0 || i >= target.length) {
      // note: like the typed-array analogy above, this also forbids
      // growing the array by assigning past its current length
      throw new ReferenceError();
    }
    return prop;
  }

  function Array(...args) {
    return new Proxy(new A(...args), { // `A` is the original Array
      get(target, prop) {
        return target[check(target, prop)];
      },
      set(target, prop, value) {
        target[check(target, prop)] = value;
        return true; // a Proxy set trap must report success
      },
    });
  }

  Array.prototype = A.prototype; // note: shares (and mutates) the real prototype
  Array.prototype.constructor = Array;
  Object.setPrototypeOf(Array, A);
  return Array;
})(Array);

(note: it's untested, typed on a phone...)


@impinball Not sure that there is something to "fix" in the first place... Those are the semantics of JS, and TS must accommodate them somehow.

What if we just had (slightly different) markup for declaring a sparse array vs a non-sparse array, and made the non-sparse one automatically initialise (maybe the programmer can supply a value or equation for the initialisation)? That way we can force sparse arrays to be of type T|undefined (which would change to type T using for...of and other 'safe' operations) and leave non-sparse arrays' types alone.

//not-sparse
var array = [arrlength] => index*3;
var array = <number[]>[3];
//sparse
var array = [];

Obviously this is not the final syntax.
The second example would initialise every value to the compiler-default for that type.
This does mean that for non-sparse arrays you have to type them, otherwise I suspect you'd have to cast them to something after they were fully initialised.
Also we would need a non-sparse type for arrays so programmers can cast their arrays to non-sparse.

@Griffork
I don't know... it gets confusing.

That way we can force sparse arrays to be type T|undefined (which would change to type T using for... of and other 'safe' operations)

Because of a quirk in JS it doesn't work that way. Assume let arr: (T|undefined)[].
So I am free to do: arr[0] = undefined.
If I do that then using those "safe" functions _will_ return undefined for the first slot. So in arr.forEach(x => ...) you cannot say that x: T. It still has to be x: T|undefined.

The second example would initialise every value to the compiler-default for that type.

This is not very TS-like in spirit. Maybe I'm wrong but it seems to me that TS philosophy is that types are only an additional layer on top of JS and they don't impact codegen. This has perf implications for the sake of partial type-correctness that I don't quite like.

TS can obviously not protect you from everything in JS and there are several functions / constructs that you may call from valid TS, which have dynamic effects on the runtime type of your objects and break static TS type analysis.

Would it be bad if this was a hole in the type system? I mean, this code is really not common: let x: number[] = []; x[3] = 0; and if that's the kind of things you want to do then maybe you should declare your array let x: number?[].

It's not perfect but I think it's good enough for most real-world usage. If you're a purist who wants a sound type system, then you should certainly look at another language, because TS type system is not sound anyway. What do you think?

That's why I said you also need the ability to cast to a non-sparse array
type, so you can initialise an array yourself without the performance
impact.
I'd settle for just a differentiation between (what are meant to be) sparse
arrays and non-sparse arrays by type.

For those who don't know why this is important, it's the same reason you
would want a difference between T and T|null.


@jods4 What I meant by "fixing" was fixing what's IMHO a language design flaw: not in TypeScript, but in _JavaScript itself_.

@jods I'm complaining about JS, not TS. I'll admit it's off topic.


And as for my statement with Array lengths, we could operate on the assumption that all array accesses are in bounds, and that out-of-bounds access is undefined unless explicitly specified in the interface. That's very much like what C/C++ does, and it would allow both better typing, and potentially a whole load of compiler optimizations, if someone decides to eventually write a third party compiler that uses the language spec, but isn't as concerned about matching the emit.

I know supporting C/C++-style undefined behavior sounds very stupid on the surface, but I think that in this case, it could be worth it. It's rare to see something that is actually done _better_ by making an out-of-bounds access. 99.99% of the uses I've seen for it are just extremely pungent code smells, almost always from people who have almost no familiarity with JavaScript.

(Most of these people, in my experience, haven't even heard of CoffeeScript, much less TypeScript. Many of them are even unaware of the new version of JS that was just finalized and standardized, ES2015.)

Is there an emerging resolution to this?

Short of having a non-nullable type, it still seems useful for TypeScript to fail if one is trying to access a property on a variable that is _assuredly_ null.

var o = null;
console.log(o.x);

... should fail.

Ensuring through the type system that all array access is bounds-checked seems like drifting into the realm of dependent types. While dependent types are pretty neat, that seems like a much bigger feature than non-nullable types.

It seems like there are three options assuming that bounds-checking isn't enforced on arrays at compile time:

  1. Array indexing (and any arbitrary array element access by index) is considered to return a nullable type, even on arrays of non-nullable types. Essentially, the [] "method" has a type signature of T?. If you know that you're only doing bounds-checked indexing, you can cast the T? to a T! in your application code.
  2. Array indexing returns exactly the same type (with the same nullability) as the array's generic type parameter, and it's assumed that all array access is bounds-checked by the application. Out-of-bounds access will return undefined, and won't be caught by the type-checker.
  3. The nuclear option: all arrays are hardcoded into the language as being nullable, and attempts to use non-nullable arrays fail typechecks.

These all apply to index-based access of properties on objects, too, e.g. object['property'] where object is of type { [ key: string ]: T! }.

Personally I prefer the first option, where indexing into an array or object returns a nullable type. But even the second option seems better than everything being nullable, which is the current state of affairs. The nuclear option is gross but honestly also still better than everything being nullable.

There's a second question of whether types should be by default non-nullable, or by default nullable. It seems like in either case it would be useful to have syntax both for explicitly nullable and explicitly non-nullable types in order to handle generics; e.g. imagine a get method on a container class (e.g. a Map) that took some key and possibly returned a value, _even if the container only contained non-nullable values_:

class Container<K,V> {
  get(key: K): V? {
    // fetch from some internal data structure and return the value, if it exists
    // return null otherwise
  }
}

// only non-nullable values allowed in the container
const container = new Container<SomeKeyClass!, SomeValueClass!>();
let val: SomeValueClass!;
// ... later, we attempt to read from the container with a get() call
// even though only non-nullables are allowed in the container, the following should fail:
// get() explicitly returns null when the item can't be found
val = container.get(someKey);

Similarly, we might (this is less of a strong argument) want to ensure our container class only accepted non-null keys on inserts, even when using a nullable key type:

class Container<K, V> {
  insert(key: K!, val: V): void {
    // put the val in the data structure
    // the key must not be null here, even if K is elsewhere a nullable type
  }
}

const container = new Container<SomeKeyClass?, SomeValueClass>();
container.insert(null, new SomeValueClass()); // fails

So regardless of whether the default changes, it seems like it would be useful to have explicit syntax for both nullable types and non-nullable types. Unless I'm missing something?

At the point where there's syntax for both, the default seems like it could be a compiler flag similar to --noImplicitAny. Personally I'd vote for the default staying the same until a 2.0 release, but either seems fine as long as there's an escape hatch (at least temporarily).

I would prefer the second option: even though it would make out-of-bounds access undefined behavior (in the realm of TS typing), I think that's a good compromise. It can greatly increase speed, and it's simpler to deal with. If you really expect that an out-of-bounds access is possible, you should either use a nullable type explicitly or cast the result to a nullable type (which is always possible). And generally, if the array is non-nullable, any out-of-bounds access is almost always a bug, one that should erupt violently at some point (that it doesn't is a JS flaw).

It pretty much requires the programmer to be explicit in what they're expecting. It's less type-safe, but in this case, I think type safety may end up getting in the way.

Here's a comparison with each option, using a summation function as an example (primitives are the most problematic):

// Option 1
function sum(numbers: !number[]) {
  let res = 0
  for (let i = 0; i < numbers.length; i++) {
    res += <!number> numbers[i]
  }
  return res
}

// Option 2
function sum(numbers: !number[]) {
  let res = 0
  for (let i = 0; i < numbers.length; i++) {
    res += numbers[i]
  }
  return res
}

// Option 3
function sum(numbers: number[]) {
  let res = 0
  for (let i = 0; i < numbers.length; i++) {
    res += <!number> numbers[i]
  }
  return res
}

Another example: a map function.

// Option 1
function map<T>(list: !T[], f: (value: !T, index: !number) => !T): !T[] {
  let res: !T[] = []
  for (let i = 0; i < list.length; i++) {
    res.push(f(<!T> list[i], i));
  }
  return res
}

// Option 2
function map<T>(list: !T[], f: (value: !T, index: !number) => !T): !T[] {
  let res: !T[] = []
  for (let i = 0; i < list.length; i++) {
    res.push(f(list[i], i));
  }
  return res
}

// Option 3
function map<T>(list: T[], f: (value: !T, index: !number) => !T): T[] {
  let res: T[] = []
  for (let i = 0; i < list.length; i++) {
    const entry = list[i]
    if (entry !== undefined) {
      res.push(f(<!T> entry, i));
    }
  }
  return res
}

Another question: what is the type of entry in each of these? !string, ?string, or string?

declare const regularStrings: string[];
declare const nullableStrings: ?string[];
declare const nonnullableStrings: !string[];

for (const entry of regularStrings) { /* ... */  }
for (const entry of nullableStrings) { /* ... */  }
for (const entry of nonnullableStrings) { /* ... */  }

Option three was a bit of a tongue-in-cheek suggestion :stuck_out_tongue:

Re: your last question:

declare const regularStrings: string[];
declare const nullableStrings: string?[];
declare const nonNullableStrings: string![]; // fails typecheck in option three

for(const entry of regularStrings) {
  // option 1: entry is of type string?
  // option 2: depends on default nullability
}

for(const entry of nullableStrings) {
  // option 1 and 2: entry is of type string?
}

for(const entry of nonNullableStrings) {
  // option 1: entry is of type string?
  // option 2: entry is of type string!
}

In some cases — where you want to return a non-nullable type and you're getting it from an array, for example — you'll have to do an extra cast with option one, assuming you've elsewhere guaranteed there are no undefined values in the array (the requirement of this guarantee doesn't change regardless of which approach is taken, only the need to type as string!). Personally I still prefer it because it's both more explicit (you have to spell out when you're taking possibly-dangerous behavior, as opposed to it happening implicitly) and more consistent with how most container classes work: for example, a Map's get function clearly returns nullable types (it returns the object if it exists under the key, or null if it doesn't), and if Map.prototype.get returns a nullable then object['property'] should probably do the same, since they make similar guarantees about nullability and are used similarly. Which leaves arrays as the odd ones out, where null reference errors can creep back in, and where random access is allowed to be non-nullable by the type system.

There are definitely other approaches; for example, currently Flow uses option two, and last I checked SoundScript made sparse arrays explicitly illegal in their spec (well, strong mode/"SaneScript" makes them illegal, and SoundScript is a superset of the new rules), which to some extent sidesteps the problem although they'll still need to figure out how to deal with manual length changes and with initial allocation. I suspect they'll come closer to option one in the convenience vs. safety tradeoff — that is, it'll be less convenient to write but more safe — due to their emphasis on type system soundness, but it'll probably look somewhat different than either of these approaches due to the ban on sparse arrays.

I think the performance aspect is extremely theoretical at this point, since AFAIK TypeScript will continue emitting the same JS regardless of casts for any choice here and the underlying VMs will continue bounds-checking arrays under the hood regardless. So I'm not too swayed by that argument. The question to my mind is mostly around convenience vs. safety; to me, the safety win here seems worth the convenience tradeoff. Of course, either is an improvement over having all types be nullable.

I agree that the performance part is mostly theoretical, but I still would like the convenience of assuming. Most arrays are dense, and nullability by default doesn't make sense for boolean and numeric arrays. If an array is not intended to be dense, it should be marked as explicitly nullable so the intent is clear.

TypeScript really needs a way of asserting things, since asserts are often used to assist in static type checking in other languages. I've seen an UNREACHABLE(); macro in the V8 code base that lets assumptions be made a little more safely, crashing the program if the invariant is violated. C++ has static_assert for static assertions to aid in type checking.
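
For illustration, a runtime-only equivalent one could write in TS today (a hypothetical helper; it fails fast like UNREACHABLE(), though the compiler learns nothing from it):

``` .ts
function assertNonNull<T>(value: T, message?: string): T {
    if (value === null || value === undefined) {
        // Crash loudly at the violation site instead of propagating undefined.
        throw new Error(message || "invariant violated: value was null/undefined");
    }
    return value;
}

declare const maybeName: string; // may still be null at runtime despite the type
const nameLength = assertNonNull(maybeName, "name missing").length;
```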

Should we just start calling them non-void types? In any case, I think explicitly defining non-void with T! or !T is a mistake. It's difficult to read as a human, and also difficult to deal with all of the cases for the TypeScript compiler.

The main issue I see with non-void types by default is that it is a breaking change. Well, what if we just add in some more static analysis, similar to Flow, that doesn't change the behavior at all, but will catch more bugs? Then we can catch many of the bugs of this class now, but don't change the syntax, and in the future, it will be much easier to introduce a compiler flag or default behavior that is less of a breaking change.

``` .ts
// function compiles happily
function len(x: string): number {
    return x.length;
}

len("works"); // 5
len(null); // error, no property length of null
```

``` .ts
function len(x: string): number {
    if (x === null) {
        return -1;
    }
    return x.length;
}

len("works"); // 5
len(null); // -1
```

What would really be going on here is modeling the input data as non-void, but adding void implicitly when it is handled in the function. Similarly, the return type is non-void unless it can explicitly return null or undefined.

We can also add the ?T or T? type, which forces the null (and/or undefined) check before use. Personally I like T?, but there is precedent to use ?T with Flow.

``` .ts
function len(x: ?string): number {
    return x.length; // error: no length property on type ?string, you must use a type guard
}
```
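
Presumably the guarded form would then compile cleanly (a sketch in the proposed syntax; ?T is not valid TS today):

``` .ts
function len(x: ?string): number {
    if (x == null) {
        return -1; // the guard narrows ?string to string below
    }
    return x.length; // OK: x is known non-void here
}
```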

One more example -- what about using function results?

``` .ts
function len(x: string): number {
    return x.length;
}

function identity(f: string): string {
    return f;
}

function unknown(): string {
    if (Math.random() > 0.5) {
        return null;
    }
    return "maybe";
}

len("works"); // 5
len(null); // error, no property length of null

identity("works"); // "works": string
identity(null); // null: void
unknown(); // ?string

len(identity("works")); // 5
len(identity(null)); // error, no property length of null
len(unknown()); // error: no length property on type ?string, you must use a type guard
```

Under the hood, what's really going on here is that TypeScript would infer whether or not a type can be null by seeing whether the receiving code handles null and whether it is ever given a possibly-null value.

The only tricky part here is how to interface with definition files. I think this can be solved by having the default be to assume that a definition file declaring a function(t: T) does null/void checking, much like the second example. This means those functions will be able to take null values without the compiler generating an error.

This now allows two things:

  1. Gradual adoption of the ?T type syntax, to which optional parameters would already be converted.
  2. In the future a compiler flag --noImplicitVoid, could be added which would treat declaration files the same as compiled code files. This would be "breaking", but if done far down the road, the majority of libraries will adopt the best practice of using ?T when the type can be void and T when it cannot. It would also be opt-in, so only those who choose to use it would be affected. This could also require the ?T syntax be used in the case where an object could be void.

I think this is a realistic approach, as it gives much improved safety when the source is available in TypeScript, finding those tricky issues, while still allowing easy, fairly intuitive, and backwards compatible integration with definition files.
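
Concretely, the definition-file default described above might read like this (hypothetical declarations; trim and trimNullable are made-up names):

``` .ts
// some-lib.d.ts
declare function trim(s: string): string;
// Default: assumed to guard its own input, so trim(null) still compiles.
// Under a later --noImplicitVoid, that call would become an error, and
// accepting void would have to be spelled out explicitly:
declare function trimNullable(s: ?string): string;
```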

The prefix ? variant is also used in Closure Compiler annotations, IIRC.

Good point. There are also a lot of similarities with the existing optional parameter declarations:

``` .ts
interface withOptionalProperty {
    o?: string
}
interface withVoidableProperty {
    o: ?string
}

function withOptionalParam(o?: string) { }
function withVoidableParam(o: ?string) { }
```

Actually, they do use the prefix form: ? for explicit nullable types and ! for non-nullable types, with nullable being the default.

The voidable vs nullable distinction makes a lot of sense. :+1:

I feel like this is going around in circles.

Have you all read above why a voidable/non-voidable definition is not going to solve the underlying problem?

@Griffork I've read every comment. I'll admit that what I said is somewhat of a rehashing / combination of what others have said, but I think it is the most realistic path forward. I don't know what you see as the underlying problem, but to me the underlying problem is that null and undefined are part of every type, and the compiler does not currently ensure safety when trying to use an argument of type T. As per my example:

``` .ts
function len(x: string): number {
    return x.length;
}

len("works");
// No error -- 5

len(null);
// Compiler allows this but should error here with something like
// error: no property 'length' of null

len(undefined);
// Compiler allows this but should error here with something like
// error: no property 'length' of undefined
```

That's my take on the fundamental problem with the _language_ -- the lack of safety. A huge amount of this can be fixed by looking at the flow of arguments into functions using static and type analysis, and giving an error when something could be done unsafely. There doesn't need to be any ?T type at all when the code is available, because that analysis can be performed (albeit not perfectly accurately in every case). The reason for adding a ?T type is that it forces the safety check and makes the programmer's intention very clear.

The problem with the _implementation_ is backwards compatibility. If the TS team were to release a change tomorrow making a type T non-void by default, it would break existing code which currently accepts and handles void inputs with those type signatures. TS team members have stated in this very issue that they are not willing to make such a large breaking change. They are willing to break some things, if the effect is small enough and the benefit is large enough, but this would have too large an impact.

My proposal is in two separate parts, one of which I think would be a great addition to the language which changes none of the existing syntax/semantics except for finding real true bugs, and the other is a possible way to reduce the impact of the breaking change to get the stronger guarantees of non-void types in the future. It is still a breaking change, but of a hopefully more acceptable size and nature.

Part One:

  • Add analysis to identify when null/undefined/void types could be passed to functions which do not handle them (only works when TS code is present, not in definition files).
  • Add a ?T type which forces a void check before using the argument. This is really just syntactic sugar around a language-level option type.
  • These two features can be implemented independently as each have their own individual merit

Part Two:

  • Wait. Later down the line, after Part One is introduced and the community and users of TS have had time to use those features, the standard will be to use T when the type is not null and ?T when it could be. This isn't guaranteed, but I think it would become a clear, obvious best practice.
  • As a separate feature, add a compiler option --noImplicitVoid which requires types to be ?T if they can be void. This is just the compiler enforcing the already existing best-practice. If there is a definition file that doesn't adhere to the best practice it would be incorrect, but that's why it is opt-in.
  • If you really wanted to be strict, the flag could accept arguments specifying which directories/files it should apply to. Then you could apply the change only to your code, and exclude node_modules.

I think this is the most realistic option because Part One can be done even without the second part. It's still a good feature that will largely mitigate this issue. Sure it's not perfect, but I'll take good enough if it means it can happen. It also leaves the option on the table for true non-null types in the future. A problem right now is that the longer this issue persists, the more of a breaking change it is to fix, because there is more code being written in TS. With Part One, it should at worst dramatically slow that down, and at best reduce the impact over time.

@tejacques I, for one, am totally on board with this.

@tejacques - FWIW, I completely agree with your assessment and proposal. Lets hope the TS team agrees :)

Actually, two things:

One, I'm not sure the Flow-like analysis mentioned in Part One is necessary. While it's very cool and useful, I certainly wouldn't want it to hold up voidable/?T types, which seem much more feasible in the current design of the language and provide much more long-term value.

I would also phrase --noImplicitVoid a bit differently - let's say it disallows assigning null and undefined to non-voidable (that is, default) types, rather than "requiring types to be ?T if they can be void". I'm pretty sure we mean the same thing; it's just semantics. That phrasing focuses on usage rather than definition, which, if I understand how TS works, is the only thing it can actually enforce.

And something just came to mind: we would then have four levels of voidability (this is also true of Flow, which I think is inspiring a good deal of this conversation):

interface Foo {
  w: string;
  x?: string;
  y: ?string;
  z?: ?string;
}

Under --noImplicitVoid, w can only be a valid string. This is a huge win for type safety. Goodbye, billion dollar mistake! Without --noImplicitVoid, of course, the only constraint is that it must be specified, but it can be null or undefined. This is a rather dangerous behavior of the language, I think, because it looks like it's guaranteeing more than it really is.

x is completely lenient under current settings. It can be a string, null, undefined, and it need not even be present in objects that implement it. Under --noImplicitVoid, things get a little more complex... you might not define it, but if you _do_ set something to it, can it not be void? I think the way Flow handles this is that you could set x to undefined (imitating non-existence), but not null. This might be a little bit too opinionated for TypeScript, though.

Also, it would make logical sense to require void-checking x before using it, but that again would be a breaking change. Could this behavior be part of the --noImplicitVoid flag?

y can be set to any string or void value, but _must_ be present in some form, even if void. And, of course, accessing it requires a void-check. This behavior might be a bit surprising. Should we consider _not_ setting it to be the same as setting it to undefined? If so, what would make it different from x, except requiring a void check?

And finally, z need not be specified, can be set to absolutely anything (well, except a non-string of course), and (for good reason) requires a void check before accessing it. Everything makes sense here!

There's a bit of overlap between x and y, and I suspect that x would eventually become deprecated by the community, preferring the z form for maximum safety.
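
To make the four cases concrete, here is how assignments might check out under the proposed --noImplicitVoid (hypothetical semantics, following the walkthrough above):

``` .ts
const foo: Foo = { w: "a", y: null }; // x and z may be omitted; y must appear
foo.w = null;       // error: w is non-voidable
foo.x = undefined;  // arguably OK: imitates non-existence (the Flow answer)
foo.y = undefined;  // OK: y is voidable, it merely must be present
foo.z = null;       // OK: anything goes for z (except a non-string)
```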

@tejacques

There doesn't need to be any ?T type at all when the code is available because that analysis can be performed (albeit not in every case perfectly accurately all of the time).

This statement is incorrect. Lots of aspects of nullability can't be accounted for even with full source code available.

For instance: when calling into an interface (statically), you can't know if passing null is alright or not if the interface is not annotated accordingly. In a structural type system such as TS, it's not even easy to know which objects are implementations of the interface and which are not. In general you don't care, but if you want to deduce nullability from the source code, you would.

Another example: if I have an array list and a function unsafe(x) whose source code shows that it doesn't accept a null argument, the compiler can't say if that line is safe or not: list.filter(unsafe). And in fact, unless you can statically know what all possible contents of list will be, this can't be done.
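
In code, the situation just described (a sketch; unsafe and list are stand-in names):

``` .ts
declare function unsafe(x: string): boolean; // body never handles null
declare const list: string[];                // may hold nulls at runtime today

// Whether this line is safe depends on the runtime contents of list,
// which no amount of analysis of unsafe() alone can determine:
list.filter(unsafe);
```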

Other cases are linked to inheritance and more.

I'm not saying that a code analysis tool that would flag blatant violations of null contracts has no value (it does). I'm just pointing out that downplaying the usefulness of null annotations when source code is available is IMHO an error.

I said somewhere in this discussion that I think inference of nullability could help reduce backward-compatibility breakage in many simple cases. But it can't completely replace source annotations.

@tejacques my bad, I misread your comment (for some reason my brain decided void===null :( I blame just waking up).
Thanks for the extra post though, it made far more sense to me than your original post. I actually quite like that idea.

I'm going to reply to the three of you separately, because writing it all out in one post would be an unreadable blob.

@Griffork No problem. Sometimes it's hard to convey everything properly through text and I think it was worth clarifying anyway.

@dallonf Your rephrasing is exactly what I mean -- we're on the same page.

I think the difference between x?: T and y: ?T would be in the tooltips/function usage, and in the typing and guard used.

Declaring a parameter as optional changes the tooltips/function usage so it's clear it is optional:
a?: T does not need to be passed as an argument in the function call; it can be left blank.
If a declaration does not have ?:, it must be passed as an argument in the function call and cannot be left blank.

w: T is a required non-void argument (with --noImplicitVoid)
does not require a guard

x?: T is an optional argument, so the type is really T | undefined
requires an if (typeof x !== 'undefined') guard.
Note the triple glyph !== for exact checking on undefined.

y: ?T is a required argument, and the type is really T | void
requires an if (y == null) guard.
Note the double glyph == which matches both null and undefined i.e. void

z?: ?T is an optional argument, and the type is really T | undefined | void which is T | void
requires an if (z == null) guard.
Note again the double glyph == which matches both null and undefined i.e. void

As with all optional arguments, you can't have required arguments that follow optional ones. So that's what the difference would be. Now, you could also just do a null guard on the optional argument, and that would work, too, but a key difference is that you could not pass the null value to the function if you called it; however, you could pass undefined.

I think all of these actually have a place, so it wouldn't necessarily deprecate the current optional argument syntax, but I agree that the z form is the safest.
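
A compact sketch of the four guards side by side (proposed syntax; none of this is valid TS today):

``` .ts
function demo(w: string, x?: string, y: ?string, z?: ?string) {
    w.length;                               // OK under --noImplicitVoid
    if (typeof x !== 'undefined') x.length; // optional: exact undefined check
    if (y != null) y.length;                // double glyph matches null and undefined
    if (z != null) z.length;                // same guard for the combined case
}
```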

Edit: Updated wording and fixed some typos.

@jods4 I agree with pretty much everything you said. I'm not trying to downplay the importance of non-void typing. I'm just trying to push it one phase at a time. If the TS team can't do it later, at least we're better off, and if they can do it down the road after implementing more checks and ?T then that's mission accomplished.

I think that the case with Arrays is a really tricky one indeed. You could always do something like this:

``` .ts
function numToString(x: number) {
    return x.toString();
}
var nums: number[] = Array(100);
numToString(nums[0]); // You are screwed!
```

You can try to do something specifically for uninitialized arrays, like typing the `Array` function as `Array<?T>` / `?T[]` and upgrading it to `T[]` after a for-loop initializing it, but I agree that you can't catch everything. That said, that's already a problem anyway, and arrays typically don't even send uninitialized values to `map`/`filter`/`forEach`.

Here's an example -- the output is the same on Node/Chrome/IE/FF/Safari.

``` .ts
function timesTwo(x: number) {
    return x * 2;
}
function all(x) {
    return true;
}
var nums: number[] = Array(100);
nums.map(timesTwo);
// [undefined x 100]
nums.filter(all);
// []
nums.forEach(function(x) { console.log(x); })
// No output
```

That's really not helping you that much, since it is unexpected, but it's not an error in real JavaScript today.

The only other thing I want to stress is that you can make progress even with interfaces; it's just a lot more work and effort via static analysis than via the type system, but it's not too dissimilar to what already happens now.

Here's an example. Let's assume that --noImplicitVoid is off

``` .ts
interface ITransform<T, U> {
    (x: T): U;
}

interface IHaveName {
    name: string;
}

function transform<T, U>(x: T, fn: ITransform<T, U>) {
    return fn(x);
}

var named = {
    name: "Foo"
};

var wrongName = {
    name: 1234
};

var namedNull = {
    name: null
};

var someFun = (x: IHaveName) => x.name;
var someFunHandlesVoid = (x: IHaveName) => {
    if (x != null && x.name != null) {
        return x.name;
    }
    return "No Name";
};
```

All of the above code compiles just fine -- no issues. Now let's try using it:

``` .ts
someFun(named);
// "Foo"
someFun(wrongName);
// error TS2345: Argument of type '{ name: number; }' is not assignable to parameter
// of type 'IHaveName'.
//   Types of property 'name' are incompatible.
//     Type 'number' is not assignable to type 'string'.
someFun(null);
// Not currently an error, but would be something like this:
// error TS#: Argument of type 'null' is not assignable to parameter of type 'IHaveName'.
someFun(namedNull);
// Not currently an error, but would be something like this:
// error TS#: Argument of type '{ name: null; }' is not assignable to parameter of
// type 'IHaveName'.
//   Types of property 'name' are incompatible.
//     Type 'null' is not assignable to type 'string'.

someFunHandlesVoid(named);
// "Foo"
someFunHandlesVoid(wrongName);
// error TS2345: Argument of type '{ name: number; }' is not assignable to parameter
// of type 'IHaveName'.
someFunHandlesVoid(null);
// "No Name"
someFunHandlesVoid(namedNull);
// "No Name"

transform(named, someFun);
// "Foo"
transform(wrongName, someFun);
// error TS2453: The type argument for type parameter 'T' cannot be inferred from the usage.
// Consider specifying the type arguments explicitly.
//   Type argument candidate '{ name: number; }' is not a valid type argument because it
//   is not a supertype of candidate 'IHaveName'.
//     Types of property 'name' are incompatible.
//       Type 'string' is not assignable to type 'number'.
transform(null, someFun);
// Not currently an error, but would be something like this:
// error TS#: The type argument for type parameter 'T' cannot be inferred from the usage.
// Consider specifying the type arguments explicitly.
//   Type argument candidate 'null' is not a valid type argument because it
//   is not a supertype of candidate 'IHaveName'.
transform(namedNull, someFun);
// Not currently an error, but would be something like this:
// error TS#: The type argument for type parameter 'T' cannot be inferred from the usage.
// Consider specifying the type arguments explicitly.
//   Type argument candidate '{ name: null; }' is not a valid type argument because it
//   is not a supertype of candidate 'IHaveName'.
//     Types of property 'name' are incompatible.
//       Type 'string' is not assignable to type 'null'.

transform(named, someFunHandlesVoid);
// "Foo"
transform(wrongName, someFunHandlesVoid);
// error TS2453: The type argument for type parameter 'T' cannot be inferred from the usage.
// Consider specifying the type arguments explicitly.
//   Type argument candidate '{ name: number; }' is not a valid type argument because it
//   is not a supertype of candidate 'IHaveName'.
transform(null, someFunHandlesVoid);
// "No Name"
transform(namedNull, someFunHandlesVoid);
// "No Name"

You're right that you can't catch everything, but you can catch a lot of stuff.

Final note -- what should the behavior of the above be when --noImplicitVoid is on?

Now someFun and someFunHandlesVoid are both typechecked the same and produce the same error messages that someFun produced. Even though someFunHandlesVoid does handle void, calling it with null or undefined is an error because the signature states it takes non-void. It would need to be typed as (x: ?IHaveName): string to accept null or undefined. If we change its type, then it continues to work as it did before.

This is the part that is a breaking change, but all we have to do to fix it is add a single character, ?, to the type signature. We can even have another flag, --warnImplicitVoid, which does the same thing as a warning so we can slowly migrate over.
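
That is, the migration would be (hypothetical syntax again):

``` .ts
// Before -- breaks under --noImplicitVoid when called with null:
var handlesVoidBefore = (x: IHaveName) => { /* ... */ };

// After -- the one-character fix:
var handlesVoidAfter = (x: ?IHaveName) => { /* ... */ };
```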

I feel like a total jerk for doing this, but I'm going to make one more post.

At this point I'm not sure what to do to proceed. Is there a better idea? Should we:

  • keep discussing/speccing out how this should behave?
  • turn this into three new feature proposals?

    • Enhanced Analysis

    • Maybe/Option type ?T

    • --noImplicitVoid compiler option

  • ping TypeScript team members for input?

I'm leaning towards new proposals and continuing discussion there since it's almost inhumane to ask the TypeScript team to catch up on this thread considering how long it is.

@tejacques

  • You're missing a typeof in the triple equals example to dallonf.
  • You appear to be missing some ? in the example to jods4.

Much as I think the stuff should stay in this thread, I think this thread isn't really being "watched" any more (maybe more like glanced at occasionally). So creating some new threads would definitely build traction.
But wait a few days/a week for people to pop their heads up and supply feedback first. You will want your proposal to be pretty solid.

Edit: remembered markdown exists.

Commenting on this thread is pretty futile at this stage. Even if a proposal is made which the TypeScript team considers acceptable (I attempted this above in August) there's no way they'll find it among the noise.

The best you can hope is that the level of attention is enough of a prompt for the TypeScript team to come up with their own proposal and implement it. Otherwise, just forget about it and use Flow.

+1 for splitting this up, but for now, the --noImplicitVoid option can wait for the nullable type to be implemented.

So far, we've mostly come to agreement on the syntax and semantics of nullable types, so if someone could write out a proposal and implementation of it, that would be golden. I've got a proposal from a similar process regarding enums of other types, but I just haven't had the time to implement it due to other projects.


+1 for --noImplicitNull option (disallow void and null assignment).

I attempted to mitigate this problem with a special type Op<A> = A | NullType. It seems to work pretty well. See here.

+1 for _--noImplicitNull_ as well PLEASE :+1:

+1 for --noImplicitNull

Should this be closed?

@Gaelan Given #7140 is merged, if you would like to file a new, dedicated issue for --noImplicitNull as suggested by a few people here, then it's probably safe to do so now.

@isiahmeadows It would probably be better to leave this open then.

Should this be closed?

We think https://github.com/Microsoft/TypeScript/issues/2388 is the remaining part of this work. This is why we have not declared this feature complete yet.

if you would like to file a new, dedicated issue for --noImplicitNull as suggested by a few people here, then it's probably safe to do so now.

I am not sure i understand what is requested semantics of this new flag. i would recommend opening a new issue with a clear proposal.

@mhegazy The idea posited earlier in this issue for --noImplicitNull was that everything has to be explicitly ?Type or !Type. IMHO I don't feel it's worth the boilerplate when there's another flag that infers non-nullable by default that IIRC was already implemented when nullable types themselves were.

Closing now that #7140 and #8010 are both merged.

Sorry if I comment on a closed issue but I don't know a better place where to ask and I don't think this is worth a new issue if there's no interest.
Would it be feasible to handle implicit null on a per-file basis?
Like, handle a bunch of td files with noImplicitNull (because they come from definitelytyped and were conceived that way) but handle my source as implicitNull?
Would anybody find this useful?

@massimiliano-mantione, please see https://github.com/Microsoft/TypeScript/issues/8405
