Rust: Tracking issue for async/await (RFC 2394)

Created on 8 May 2018  ·  308 comments  ·  Source: rust-lang/rust

This is the tracking issue for RFC 2394 (rust-lang/rfcs#2394), which adds async and await syntax to the language.

I will be spearheading the implementation work of this RFC, but would appreciate mentorship as I have relatively little experience working in rustc.

TODO:

Unresolved questions:

A-async-await A-generators AsyncAwait-Triaged B-RFC-approved C-tracking-issue T-lang


All 308 comments

The discussion here seems to have died down, so linking it here as part of the await syntax question: https://internals.rust-lang.org/t/explicit-future-construction-implicit-await/7344

Implementation is blocked on #50307.

About syntax: I'd really like to have await as a simple keyword. For example, let's look at a concern from the blog:

We aren’t exactly certain what syntax we want for the await keyword. If something is a future of a Result - as any IO future likely to be - you want to be able to await it and then apply the ? operator to it. But the order of precedence to enable this might seem surprising - await io_future? would await first and ? second, despite ? being lexically more tightly bound than await.

I agree here, but braces are evil. I think it's easier to remember that ? has lower precedence than await and be done with it:

let foo = await future?

It's easier to read, it's easier to refactor. I do believe it's the better approach.

let foo = await!(future)?

It makes the order in which operations are executed easier to understand, but IMO it's less readable.

I do believe that once you understand that await foo? executes await first, you have no problem with it. ? may be lexically more tightly bound, but await is on the left side and ? is on the right, so it's still logical enough to await first and handle the Result afterwards.


If any disagreements exist, please express them so we can discuss. I don't understand what a silent downvote stands for. We all wish the best for Rust.

I have mixed views on await being a keyword, @Pzixel. While it certainly has an aesthetic appeal, and is perhaps more consistent given that async is a keyword, "keyword bloat" in any language is a real concern. That said, does having async without await even make any sense, feature-wise? If it does, perhaps we can leave it as is. If not, I'd lean towards making await a keyword.

I think it's easier to remember that ? has lower precedence than await and be done with it

It might be possible to learn that and internalise it, but there's a strong intuition that things that are touching are more tightly bound than things that are separated by whitespace, so I think it would always read wrong on first glance in practice.

It also doesn't help in all cases, e.g. a function that returns a Result<impl Future, _>:

let foo = await (foo()?)?;

The concern here is not simply "can you understand the precedence of a single await+?," but also "what does it look like to chain several awaits." So even if we just picked a precedence, we would still have the problem of await (await (await first()?).second()?).third()?.

A summary of the options for await syntax, some from the RFC and the rest from the RFC thread:

  • Require delimiters of some kind: await { future }? or await(future)? (this is noisy).
  • Simply pick a precedence, so that await future? or (await future)? does what is expected (both of these feel surprising).
  • Combine the two operators into something like await? future (this is unusual).
  • Make await postfix somehow, as in future await? or future.await? (this is unprecedented).
  • Use a new sigil like ? did, as in future@? (this is "line noise").
  • Use no syntax at all, making await implicit (this makes suspension points harder to see). For this to work, the act of constructing a future must also be made explicit. This is the subject of the internals thread I linked above.

That said, does having async without await even make any sense, feature wise?

@alexreg It does. Kotlin works this way, for example. This is the "implicit await" option.

@rpjohnst Interesting. Well, I'm generally for leaving async and await as explicit features of the language, since I think that's more in the spirit of Rust, but then I'm no expert on asynchronous programming...

@alexreg async/await is a really nice feature; I work with it on a day-to-day basis in C# (which is my primary language). @rpjohnst classified all the possibilities very well. I prefer the second option, and I agree with the other considerations (noisy/unusual/...). I have been working with async/await code for the last 5 years or so, and it's really important to have such flag keywords.

@rpjohnst

So even if we just picked a precedence, we would still have the problem of await (await (await first()?).second()?).third()?.

In my practice you never write two awaits on one line. In the very rare cases when you need to, you simply rewrite it with then and don't use await at all. You can see for yourself that it's much harder to read than

let first = await first()?;
let second = await first.second()?;
let third = await second.third()?;

So I think it's OK if the language discourages writing code in that manner, in order to make the primary case simpler and better.

future await? looks interesting although unfamiliar, and I don't see any logical counterarguments against it.

In my practice you never write two await's in one line.

But is this because it's a bad idea regardless of the syntax, or just because the existing await syntax of C# makes it ugly? People made similar arguments around try!() (the precursor to ?).

The postfix and implicit versions are far less ugly:

first().await?.second().await?.third().await?
first()?.second()?.third()?

But is this because it's a bad idea regardless of the syntax, or just because the existing await syntax of C# makes it ugly?

I think it's a bad idea regardless of the syntax because having one line per async operation is already complex enough to understand and hard to debug. Having them chained in a single statement seems to be even worse.

For example, let's take a look at some real code (a piece taken from my project):

[Fact]
public async Task Should_UpdateTrackableStatus()
{
    var web3 = TestHelper.GetWeb3();
    var factory = await SeasonFactory.DeployAsync(web3);
    var season = await factory.CreateSeasonAsync(DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddDays(1));
    var request = await season.GetOrCreateRequestAsync("123");

    var trackableStatus = new StatusUpdate(DateTimeOffset.UtcNow, Request.TrackableStatuses.First(), "Trackable status");
    var nonTrackableStatus = new StatusUpdate(DateTimeOffset.UtcNow, 0, "Nontrackable status");

    await request.UpdateStatusAsync(trackableStatus);
    await request.UpdateStatusAsync(nonTrackableStatus);

    var statuses = await request.GetStatusesAsync();

    Assert.Single(statuses);
    Assert.Equal(trackableStatus, statuses.Single());
}

It shows that in practice it isn't worth chaining awaits even if the syntax allows it, because it would become completely unreadable. await just makes a one-liner even harder to write and read, but I believe that's not the only reason it's bad.

The postfix and implicit versions are far less ugly

Being able to distinguish starting a task from awaiting it is really important. For example, I often write code like this (again, a snippet from the project):

public async Task<StatusUpdate[]> GetStatusesAsync()
{
    int statusUpdatesCount = await Contract.GetFunction("getStatusUpdatesCount").CallAsync<int>();
    var getStatusUpdate = Contract.GetFunction("getStatusUpdate");
    var tasks = Enumerable.Range(0, statusUpdatesCount).Select(async i =>
    {
        var statusUpdate = await getStatusUpdate.CallDeserializingToObjectAsync<StatusUpdateStruct>(i);
        return new StatusUpdate(XDateTime.UtcOffsetFromTicks(statusUpdate.UpdateDate), statusUpdate.StatusCode, statusUpdate.Note);
    });

    return await Task.WhenAll(tasks);
}

Here we are creating N async requests and then awaiting them. We don't await on each loop iteration; instead, we first create an array of async requests and then await them all at once.

I don't know Kotlin, so maybe they resolve this somehow. But I don't see how you can express this if "running" and "awaiting" a task are the same thing.


So I think the implicit version is a no-go even in much more implicit languages like C#.
In Rust, whose rules don't even allow you to implicitly convert a u8 to an i32, it would be much more confusing.

@Pzixel Yeah, the second option sounds like one of the more preferable ones. I've used async/await in C# too, but not very much, since I haven't programmed principally in C# for some years now. As for precedence, await (future?) is more natural to me.

@rpjohnst I kind of like the idea of a postfix operator, but I'm also worried about readability and the assumptions people will make – it could easily be confused with a struct member named await.

Possibility to distinguish task start and task await is really important.

For what it's worth, the implicit version does do this. It was discussed to death both in the RFC thread and in the internals thread, so I won't go into a lot of detail here, but the basic idea is only that it moves the explicitness from the task await to task construction- it doesn't introduce any new implicitness.

Your example would look something like this:

pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();

    let mut tasks = vec![];
    for i in 0..count {
        // Here is where task *construction* becomes explicit, as an async block:
        tasks.push(async {
            // Again, simply *calling* get_status_update looks just like a sync call:
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }

    // And finally, launching the explicitly-constructed futures is also explicit, while awaiting the result is implicit:
    join_all(&tasks[..])
}

This is what I meant by "for this to work, the act of constructing a future must also be made explicit." It's very similar to working with threads in sync code- calling a function always waits for it to complete before resuming the caller, and there are separate tools for introducing concurrency. For example, closures and thread::spawn/join correspond to async blocks and join_all/select/etc.
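To make the analogy concrete, here is a minimal sketch in ordinary synchronous Rust (get_status_update and get_statuses are placeholder names): the closures delay the work the way async blocks would, thread::spawn introduces the concurrency, and join waits for the results the way join_all would.

use std::thread;

// Placeholder for a blocking call, e.g. a network request.
fn get_status_update(i: usize) -> String {
    format!("status {}", i)
}

fn get_statuses(count: usize) -> Vec<String> {
    // Spawning the threads is the explicit "introduce concurrency" step,
    // analogous to constructing async blocks and handing them to join_all.
    let handles: Vec<_> = (0..count)
        .map(|i| thread::spawn(move || get_status_update(i)))
        .collect();

    // Joining waits for the results; in the implicit-await analogy this is the
    // part that needs no extra annotation.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    println!("{:?}", get_statuses(3));
}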

For what it's worth, the implicit version does do this. It was discussed to death both in the RFC thread and in the internals thread, so I won't go into a lot of detail here, but the basic idea is only that it moves the explicitness from the task await to task construction- it doesn't introduce any new implicitness.

I believe it does. I can't see what the flow would be in this function, or where the points are at which execution suspends until an await completes. I only see an async block which says "hello, somewhere in here there are async functions, try to find out which ones, you will be surprised!".

Another point: Rust tends to be a language where you can express everything, stay close to bare metal, and so on. I'd like to provide some rather artificial code, but I think it illustrates the idea:

var a = await fooAsync(); // awaiting first task
var b = barAsync(); //running second task
var c = await bazAsync(); // awaiting third task
if (c.IsSomeCondition && b.Status != TaskStatus.RanToCompletion) // if some condition is true and b is still running
{
   var firstFinishedTask = await Task.WhenAny(b, Task.Delay(5000)); // wait up to 5 more seconds
   if (firstFinishedTask != b) // our task timed out
      throw new Exception(); // doing something
   // more logic here
}
else
{
   // more logic here
}

Rust always tends to provide full control over what's happening. await allows you to specify the points where execution suspends and resumes. It also allows you to unwrap the value inside a future. If you allow implicit awaiting at the use site, it has several implications:

  1. First of all, you have to write some dirty code to just emulate this behaviour.
  2. Now RLS and IDEs have to expect that our value is either a Future<T> or the awaited T itself. It's not an issue with a keyword: if it is present, then the result is T, otherwise it's Future<T>.
  3. It makes code harder to understand. In your example I don't see why execution is interrupted at the get_status_updates line but not at get_status_update. They are quite similar to each other. So either it doesn't work the way the original code did, or it's so complicated that I can't see it even though I'm quite familiar with the subject. Neither alternative speaks in this option's favor.

I can't see what the flow would be in this function, or where the points are at which execution suspends until an await completes.

Yes, this is what I meant by "this makes suspension points harder to see." If you read the linked internals thread, I made an argument for why this isn't that big of a problem. You don't have to write any new code, you just put the annotations in a different place (async blocks instead of awaited expressions). IDEs have no problem telling what the type is (it's always T for function calls and Future<Output=T> for async blocks).

I will also note that your understanding is probably wrong regardless of the syntax. Rust's async functions do not run any code at all until they are awaited in some way, so your b.Status != TaskStatus.RanToCompletion check will always pass. This was also discussed to death in the RFC thread, if you're interested in why it works this way.

In your example I don't see why execution is interrupted at the get_status_updates line but not at get_status_update. They are quite similar to each other.

It does interrupt execution in both places. The key is that async blocks don't run until they are awaited, because this is true of all futures in Rust, as I described above. In my example, get_statuses calls (and thus awaits) get_status_updates, then in the loop it constructs (but does not await) count futures, then it calls (and thus awaits) join_all, at which point those futures concurrently call (and thus await) get_status_update.

The only difference with your example is when exactly the futures start running- in yours, it's during the loop; in mine, it's during join_all. But this is a fundamental part of how Rust futures work, not anything to do with the implicit syntax or even with async/await at all.

I will also note that your understanding is probably wrong regardless of the syntax. Rust's async functions do not run any code at all until they are awaited in some way, so your b.Status != TaskStatus.RanToCompletion check will always pass.

Yes, C# tasks are executed synchronously until the first suspension point. Thank you for pointing that out.
However, it doesn't really matter, because I should still be able to run some task in the background while executing the rest of the method, and then check whether the background task has finished. E.g. it could be

var a = await fooAsync(); // awaiting first task
var b = Task.Run(() => barAsync()); //running background task somehow
// the rest of the method is the same

I get your idea about async blocks, and as I see it they are the same beast, but with more disadvantages. In the original proposal each async task is paired with an await. With async blocks, each task would be paired with an async block at its construction point, so we are in almost the same situation as before (a 1:1 relationship), but even a bit worse, because it feels more unnatural and is harder to understand, since call-site behavior becomes context-dependent. With await I can see let a = foo() or let b = await foo() and I know whether the task is just constructed or constructed and awaited. If I see let a = foo() with async blocks, I have to check whether there is some async block above, if I get you right, because in this case

pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();

    let mut tasks = vec![];
    for i in 0..count {
        // Here is where task *construction* becomes explicit, as an async block:
        tasks.push(async {
            // Again, simply *calling* get_status_update looks just like a sync call:
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }

    // And finally, launching the explicitly-constructed futures is also explicit, while awaiting the result is implicit:
    join_all(&tasks[..])
}

We are awaiting all the tasks at once, while here

pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();

    let mut tasks = vec![];
    for i in 0..count {
        // Isn't "just a construction" anymore
        tasks.push({
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }
    tasks 
}

We are executing them one by one.

Thus I can't tell what the exact behavior of this part is:

let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)

Without having more context.

And things get even weirder with nested blocks, not to mention questions about tooling, etc.

call-site behavior becomes context-dependent

This is already true with normal sync code and closures. For example:

// Construct a closure, delaying `do_something_synchronous()`:
task.push(|| {
    let data = do_something_synchronous();
    StatusUpdate { data }
});

vs

// Execute a block, immediately running `do_something_synchronous()`:
task.push({
    let data = do_something_synchronous();
    StatusUpdate { data }
});

One other thing that you should note from the full implicit await proposal is that you can't call async fns from non-async contexts. This means that the function call syntax some_function(arg1, arg2, etc) always runs some_function's body to completion before the caller continues, regardless of whether some_function is async. So entry into an async context is always marked explicitly, and function call syntax is actually more consistent.

Regarding await syntax: What about a macro with method syntax? I can't find an actual RFC for allowing this, but I've found a few discussions (1, 2) on reddit so the idea is not unprecedented. This would allow await to work in postfix position without making it a keyword / introducing new syntax for only this feature.

// Postfix await-as-a-keyword. Looks as if we were accessing a Result<_, _> field,
// unless await is syntax-highlighted
first().await?.second().await?.third().await?
// Macro with method syntax. A few more symbols, but clearly a macro invocation that
// can affect control flow
first().await!()?.second().await!()?.third().await!()?

There is a library from the Scala-world which simplifies monad compositions: http://monadless.io

Maybe some ideas are interesting for Rust.

quote from the docs:

Most mainstream languages have support for asynchronous programming using the async/await idiom or are implementing it (e.g. F#, C#/VB, Javascript, Python, Swift). Although useful, async/await is usually tied to a particular monad that represents asynchronous computations (Task, Future, etc.).

This library implements a solution similar to async/await but generalized to any monad type. This generalization is a major factor considering that some codebases use other monads like Task in addition to Future for asynchronous computations.

Given a monad M, the generalization uses the concept of lifting regular values to a monad (T => M[T]) and unlifting values from a monad instance (M[T] => T). Example usage:

lift {
  val a = unlift(callServiceA())
  val b = unlift(callServiceB(a))
  val c = unlift(callServiceC(b))
  (a, c)
}

Note that lift corresponds to async and unlift to await.
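For comparison, a rough Rust rendering of that example, with an async block playing the role of lift and awaiting playing the role of unlift. The service functions are placeholders, and postfix .await is used only for concreteness here; the await syntax itself is exactly what is under discussion in this thread.

use std::future::Future;

// Placeholder async services corresponding to callServiceA/B/C above.
async fn call_service_a() -> u32 { 1 }
async fn call_service_b(a: u32) -> u32 { a + 1 }
async fn call_service_c(b: u32) -> u32 { b + 1 }

// The `async` block is the `lift`; each await is an `unlift`.
fn lifted() -> impl Future<Output = (u32, u32)> {
    async {
        let a = call_service_a().await;
        let b = call_service_b(a).await;
        let c = call_service_c(b).await;
        (a, c)
    }
}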

This is already true with normal sync code and closures. For example:

I see several differences here:

  1. A lambda's context is unavoidable, but that isn't the case for await. With await we don't need a context, while with async we have to have one. The former wins, because it provides the same features but requires knowing less about the code.
  2. Lambdas tend to be short and simple, several lines at most, so we see the entire body at once. async functions may be quite big (as big as regular functions) and complicated.
  3. Lambdas are rarely nested (except for then calls, which is what await is proposed to replace), while async blocks would be nested frequently.

One other thing that you should note from the full implicit await proposal is that you can't call async fns from non-async contexts.

Hmm, I didn't notice that. It doesn't sound good, because in my practice you often want to run async code from a non-async context. In C#, async is just a keyword that allows the compiler to rewrite the function body; it doesn't affect the function interface in any way, so async Task<Foo> and Task<Foo> are completely interchangeable, and it decouples implementation and API.

Sometimes you may want to block on an async task, e.g. when you want to call some network API from main. You have to block (otherwise you return to the OS and the program ends), but you also have to run an async HTTP request. If you cannot call it from a non-async main, I'm not sure what the solution could be here, other than hacking main to allow it to be async as well, as we did with the Result main return type.

Another consideration in favor of the current await is how it works in other popular languages (as noted by @fdietze). It makes it easier to migrate from languages such as C#/TypeScript/JS/Python and is thus a better approach in terms of attracting new people.

I see several differences here

You should also realize that the main RFC already has async blocks, with the same semantics as the implicit version, then.

It doesn't sound good, because in my practice you often want to run async from non-async context.

This is not an issue. You can still use async blocks in non-async contexts (which is fine because they just evaluate to a F: Future as always), and you can still spawn or block on futures using exactly the same API as before.

You just can't call async fns, but instead wrap the call to them in an async block- as you do regardless of the context you're in, if you want a F: Future out of it.
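For example, a minimal sketch of blocking on a future from a synchronous (non-async) main, assuming the futures crate's block_on executor; fetch_page is a placeholder standing in for a real async network call, and the same pattern works whether the future comes from an async block or a hand-written implementation.

use futures::executor::block_on;
use futures::future;
use std::future::Future;

// Stand-in for a real async HTTP request.
fn fetch_page() -> impl Future<Output = String> {
    future::ready("response body".to_string())
}

fn main() {
    // Block the current (non-async) thread until the future completes.
    let body = block_on(fetch_page());
    println!("{}", body);
}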

async is just a keyword that allows compiler to rewrite function body, it doesn't affect function interface in any way

Yes, this is a legitimate difference between the proposals. It was also covered in the internals thread. Arguably, having different interfaces for the two is useful because it shows you that the async fn version will not run any code as part of construction, while the -> impl Future version may e.g. initiate a request before giving you an F: Future. It also makes async fns more consistent with normal fns, in that calling something declared as -> T will always give you a T, regardless of whether it's async.

(You should also note that in Rust there is still quite a leap between async fn and the Future-returning version, as described in the RFC. The async fn version does not mention Future anywhere in its signature; and the manual version requires impl Trait, which carries with it some problems to do with lifetimes. This is, in fact, part of the motivation for async fn to begin with.)
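To illustrate that leap, here is a sketch of the two signatures side by side (Data and the bodies are placeholders):

use std::future::Future;

struct Data;

// The `async fn` form: `Future` appears nowhere in the signature.
async fn fetch(_id: u32) -> Data {
    Data
}

// The manual form: it needs `impl Trait` in return position, which is where
// the lifetime complications mentioned above come in once borrows are involved.
fn fetch_manual(_id: u32) -> impl Future<Output = Data> {
    async { Data }
}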

It makes it easier to migrate from other language such as C#/TypeScript/JS/Python

This is an advantage only for the literal await future syntax, which is fairly problematic on its own in Rust. Anything else we might end up with also has a mismatch with those languages, while implicit await at least has a) similarities with Kotlin and b) similarities with synchronous, thread-based code.

Yes, this is a legitimate difference between the proposals. It was also covered in the internals thread. Arguably, having different interfaces for the two is useful

I'd say _having different interfaces for the two has some advantages_, because having the API depend on an implementation detail doesn't sound good to me. For example, say you are writing a contract that simply delegates a call to an internal future:

fn foo(&self) -> Future<T> {
   self.myService.foo()
}

And then you just want to add some logging

async fn foo(&self) -> T {
   let result = await self.myService.foo();
   self.logger.log("foo executed with result {}.", result);
   result
}

And it becomes a breaking change. Whoa?

This is an advantage only for the literal await future syntax, which is fairly problematic on its own in Rust. Anything else we might end up with also has a mismatch with those languages, while implicit await at least has a) similarities with Kotlin and b) similarities with synchronous, thread-based code.

It's an advantage for any await syntax (await foo / foo await / foo@ / foo.await / ...): once you get that it's the same thing, the only difference is whether you place it before or after, or use a sigil instead of a keyword.

You should also note that in Rust there is still quite a leap between async fn and the Future-returning version, as described in the RFC

I know it and it disquiets me a lot.

And it becomes a breaking change.

You can get around that by returning an async block. Under the implicit await proposal, your example looks like this:

fn foo(&self) -> impl Future<Output = T> { // Note: you never could return `Future<T>`...
    async { self.my_service.foo() } // ...and under the proposal you couldn't call `foo` outside of `async` either.
}

And with logging:

fn foo(&self) -> impl Future<Output = T> {
    async {
        let result = self.my_service.foo();
        self.logger.log("foo executed with result {}.", result);
        result
    }
}

The bigger issue with having this distinction arises during the transition of the ecosystem from manual future implementations and combinators (the only way today) to async/await. But even then the proposal allows you to keep the old interface around and provide a new async one alongside it. C# is full of that pattern, for example.

Well, that sounds reasonable.

However, I do believe such implicitness (we can't see whether foo() here is an async or a sync function) leads to the same problems that arose in protocols such as COM+, and was a reason WCF was implemented the way it was. People had problems when async remote requests looked like simple method calls.

This code looks perfectly fine, except that I can't see whether a given request is async or sync. I believe that's important information. For example:

fn foo(&self) -> impl Future<Output = T> {
    async {
        let result = self.my_service.foo();
        self.logger.log("foo executed with result {}.", result);
        let mut bars: Vec<Bar> = Vec::new();
        for i in 0..100 {
           bars.push(self.my_other_service.bar(i, result));
        }
        result
    }
}

It's crucial to know whether bar is a sync or an async function. I often see an await in a loop as a marker that the code has to be changed to achieve better throughput and performance. This is code I reviewed yesterday (it's suboptimal, but it's one of the review iterations):

[screenshot: reviewed C# code with an await call inside a loop]

As you can see, I easily spotted that we have an await in a loop here, and I asked for it to be changed. When the change was committed we got a 3x page-load speedup. Without await I could easily have overlooked this misbehaviour.
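The same smell, sketched in Rust (fetch_item and the ids are placeholders, and postfix .await is used only for concreteness; the surface syntax is the open question here): one await per loop iteration serializes the requests, while collecting the futures and handing them to join_all lets them run concurrently.

use futures::future::join_all;

// Stand-in for an async network call.
async fn fetch_item(id: u32) -> u32 {
    id
}

// One await per iteration: each request waits for the previous one to finish.
async fn sequential(ids: &[u32]) -> Vec<u32> {
    let mut out = Vec::new();
    for &id in ids {
        out.push(fetch_item(id).await);
    }
    out
}

// Construct all the futures first, then await them as a single batch.
async fn concurrent(ids: &[u32]) -> Vec<u32> {
    join_all(ids.iter().map(|&id| fetch_item(id))).await
}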

I admit I haven't used Kotlin, but last time I looked at that language, it seemed to be mostly a variant of Java with less syntax, up to the point where it was easy to mechanically translate one to the other. I can also imagine why it would be liked in the world of Java (which tends to be a little syntax-heavy), and I'm aware it recently got a boost in popularity specifically due to not being Java (the Oracle vs. Google situation).

However, if we decide to take popularity and familiarity into account, we might want to take a look at what JavaScript does, which is also explicit await.

That said, await was introduced to mainstream languages by C#, which is maybe the one language where usability was considered to be of utmost importance. In C#, asynchronous calls are indicated not only by the await keyword, but also by the Async suffix of the method names. The other language feature that shares the most with await, yield return, is also prominently visible in code.

Why is that? My take on it is that generators and asynchronous calls are too powerful constructs to let them pass unnoticed in code. There's a hierarchy of control flow operators:

  • sequential execution of statements (implicit)
  • function/method calls (quite apparent, compare with eg. Pascal where there's no difference at call site between a nullary function and a variable)
  • goto (all right, it's not a strict hierarchy)
  • generators (yield return tends to stand out)
  • await + Async suffix

Notice how they also go from less to more verbose, according to their expressiveness or power.

Of course, other languages took different approaches. Scheme continuations (like in call/cc, which isn't too different from await) or macros have no syntax to show what you are calling. For macros, Rust took the approach of making it easy to see them.

So I would argue that having less syntax isn't desirable in itself (there are languages like APL or Perl for that), and that syntax doesn't have to be just boilerplate, and has an important role in readability.

There's also a parallel argument (sorry, I can't remember the source, but it might have come from someone on the language team) that people are more comfortable with noisy syntax for new features when they are new, but then are fine with a less verbose one once they end up being commonly used.


As for the question of await!(foo)? vs. await foo?, I'm in the former camp. You can internalise pretty much any syntax; however, we are too used to taking cues from spacing and proximity. With await foo? there's a large chance one will second-guess themselves on the precedence of the two operators, while the braces make it clear what's happening. Saving three characters isn't worth it. And as for the practice of chaining await!s, while it might be a popular idiom in some languages, I feel it has too many downsides, like poor readability and interaction with debuggers, to be worth optimizing for.

Saving three characters isn't worth it.

In my anecdotal experience, extra characters (e.g. longer names) aren't much of a problem, but extra tokens can be really annoying. In terms of a CPU analogy, a long name is straightline code with good locality - I can just type it out from muscle memory - while the same number of characters when it involves multiple tokens (e.g. punctuation) is branchy and full of cache misses.

(I fully agree that await foo? would be highly non-obvious and we should avoid it, and that having to type more tokens would be far preferable; my observation is only that not all characters are created equal.)


@rpjohnst I think your alternative proposal might have slightly better reception if it were presented as "explicit async" rather than "implicit await" :-)

It's crucial to know if bar is sync or async function.

I'm not sure this is really any different from knowing whether some function is cheap or expensive, or whether it does IO or not, or whether it touches some global state or not. (This also applies to @lnicola's hierarchy- if async calls run to completion just like sync calls, then they're really no different in terms of power!)

For example, the fact that the call was in a loop is just as, if not more, important than the fact that it was async. And in Rust, where parallelization is so much easier to get right, you could just as well go around suggesting that expensive-looking synchronous loops be switched to Rayon iterators!

So I don't think requiring await is actually all that important for catching these optimizations. Loops are already always good places to look for optimization, and async fns are already a good indicator that you can get some cheap IO concurrency. If you find yourself missing those opportunities, you could even write a Clippy lint for "async call in a loop" that you run occasionally. It would be great to have a lint similar for synchronous code as well!

The motivation for "explicit async" is not simply "less syntax," as @lnicola implies. It's to make the behavior of function call syntax more consistent, so that foo() always runs foo's body to completion. Under this proposal, leaving out an annotation just gives you less-concurrent code, which is how virtually all code already behaves. Under "explicit await," leaving out an annotation introduces accidental concurrency, or at least accidental interleaving, which is problematic.

I think your alternative proposal might have slightly better reception if it were presented as "explicit async" rather than "implicit await" :-)

The thread is named "explicit future construction, implicit await," but it seems the latter name has stuck. :P

I'm not sure this is really any different from knowing whether some function is cheap or expensive, or whether it does IO or not, or whether it touches some global state or not. (This also applies to @lnicola's hierarchy- if async calls run to completion just like sync calls, then they're really no different in terms of power!)

I think this is as important as knowing that a function changes some state, and we already have the mut keyword on both the callee side and the caller side.

The motivation for "explicit async" is not simply "less syntax," as @lnicola implies. It's to make the behavior of function call syntax more consistent, so that foo() always runs foo's body to completion.

On one side it's a good consideration. On the other, you can easily separate future creation from future execution. I mean, if foo returns some abstraction that lets you later call run and get a result, that doesn't make foo useless trash that does nothing; it does a very useful thing: it constructs an object you can call methods on later. It doesn't make it any different. The foo method we call is just a black box; we see that its signature returns Future<Output=T>, and it actually returns a future. So we explicitly await it when we want to do so.

The thread is named "explicit future construction, implicit await," but it seems the latter name has stuck. :P

I personally think that the better alternative is "explicit async, explicit await" :)


P.S.

A thought also hit me tonight: did you try to communicate with the C# LDM? For example, folks like @HaloFour, @gafter, or @CyrusNajmabadi. It may be a really good idea to ask them why they chose the syntax they did. I'd propose asking people from other languages as well, but I simply don't know them :) I'm sure they had multiple debates about the existing syntax, they have already discussed it a lot, and they may have some useful ideas.

It doesn't mean Rust has to have this syntax just because C# does, but it would allow a more informed decision.

I personally think that the better alternative is "explicit async, explicit await" :)

The main proposal isn't "explicit async," though- that's why I picked the name. It's "implicit async," because you can't tell at a glance where asynchrony is being introduced. Any unannotated function call might be constructing a future without awaiting it, even though Future appears nowhere in its signature.

For what it's worth, the internals thread does include an "explicit async explicit await" alternative, because that's future-compatible with either main alternative. (See the final section of the first post.)

did you try to communicate with C# LDM?

The author of the main RFC did. The main point that came out of it, as far as I remember, was the decision not to include Future in the signature of async fns. In C#, you can replace Task with other types to have some control over how the function is driven. But in Rust, we don't (and won't) have any such mechanism: all futures will go through a single trait, so there's no need to write that trait out every time.

We also communicated with the Dart language designers, and that was a large part of my motivation for writing up the "explicit async" proposal. Dart 1 had a problem because functions didn't run to their first await when called (not quite the same as how Rust works, but similar), and that caused such massive confusion that in Dart 2 they changed it so that functions do run to their first await when called. Rust can't do that for other reasons, but it could run the entire function when called, which would also avoid that confusion.

We also communicated with the Dart language designers, and that was a large part of my motivation for writing up the "explicit async" proposal. Dart 1 had a problem because functions didn't run to their first await when called (not quite the same as how Rust works, but similar), and that caused such massive confusion that in Dart 2 they changed it so that functions do run to their first await when called. Rust can't do that for other reasons, but it could run the entire function when called, which would also avoid that confusion.

Great experience, I wasn't aware of it. Nice to hear you've done such massive work. Well done 👍

A thought also hit me tonight: did you try to communicate with the C# LDM? For example, folks like @HaloFour, @gafter, or @CyrusNajmabadi. It may be a really good idea to ask them why they chose the syntax they did.

I'm happy to provide any info you're interested in. However, I've only skimmed through this thread. Would it be possible to condense down any specific questions you currently have?

Regarding await syntax (this might be completely stupid, feel free to shout at me; I am an async programming noob and I have no idea what I am talking about):

Instead of using the word "await", can we not introduce a symbol/operator, similar to ?. For example, it could be # or @ or something else that is currently unused.

For example, if it were a postfix operator:

let stuff = func()#?;
let chain = blah1()?.blah2()#.blah3()#?;

It is very concise and reads naturally from left to right: await first (#), then handle errors (?). It doesn't have the problem that the postfix await keyword has, where .await looks like a struct member. # is clearly an operator.

I am not sure if postfix is the right place for it to be, but it felt that way because of precedence. As prefix:

let stuff = #func()?;

Or heck even:

let stuff = func#()?; // :-D :-D

Has this ever been discussed?

(I realise this kinda starts to approach the "random keyboard mash of symbols" syntax that Perl is infamous for ... :-D )

@rayvector https://github.com/rust-lang/rust/issues/50547#issuecomment-388108875 , 5th alternative.

@CyrusNajmabadi thank you for coming. The main question is which of the listed options you think fits the current Rust language best, or whether there is some other alternative. This topic isn't really long, so you can easily scroll through it quickly. The main question: should Rust follow the current C#/TS/... await approach, or should it implement its own? Is the current syntax some kind of "legacy" that you would like to change in some way, or does it fit C# the best and is it also the best option for newer languages?

The main consideration against the C# syntax is operator precedence: await foo? should await first and then evaluate the ? operator. There is also the difference that, unlike in C#, execution doesn't run on the caller's thread until the first await; it doesn't start at all, the same way the following snippet doesn't run the negative-argument check until GetEnumerator is called for the first time:

IEnumerable<int> GetInts(int n)
{
   if (n < 0)
      throw new InvalidArgumentException(nameof(n));
   for (int i = 0; i <= n; i++)
      yield return i;
}

More details are in my first comment and the later discussion.

@Pzixel Oh, I guess I missed that one when I was skimming through this thread earlier ...

In any case, I haven't seen much discussion about this, other than that brief mention.

Are there any good arguments for/against?

@rayvector I argued a little here in favour of more verbose syntax. One of the reasons is the one that you mention:

the "random keyboard mash of symbols" syntax that Perl is infamous for

To clarify, I don't think await!(f)? is really in the running for the final syntax; it was chosen specifically because it's a solid way of not committing to any particular choice. Here are the syntaxes (including the ? operator) that I think are still "in the running":

  • await f?
  • await? f
  • await { f }?
  • await(f)?
  • (await f)?
  • f.await?

Or possibly some combination of these. The point is that several of them do contain braces to be clearer about precedence & there are a lot of options here - but the intention is that await will be a keyword operator, not a macro, in the final version (barring some major change like what rpjohnst has proposed).

I vote for either a simple postfix await operator (e.g. ~) or the keyword with no parens and highest precedence.

I've been reading through this thread, and I would like to propose the following:

  • await f? evaluates the ? operator first, and then awaits the resultant future.
  • (await f)? awaits the future first, and then evaluates the ? operator against the result (due to ordinary Rust operator precedence)
  • await? f is available as syntactic sugar for (await f)?. I believe "future returning a result" will be a super common case, so a dedicated syntax makes a lot of sense.

I agree with other commenters that await should be explicit. It's pretty painless doing this in JavaScript, and I really appreciate the explicitness and readability of Rust code, and I feel like making async implicit would ruin this for async code.

It occurred to me that "implicit async block" ought to be implementable as a proc_macro, which simply inserts an await keyword before any future.

The main question is what option from listed ones you think fits better the current Rust language as it is,

Asking a C# designer what best fits the rust language is... interesting :)

I don't feel qualified to make such a determination. I like Rust and dabble with it. But it's not a language I'm using day in and day out, nor have I deeply ingrained it in my psyche. As such, I don't think I'm qualified to make any claims about what the appropriate choices for this language are. Want to ask me about Go/TypeScript/C#/VB/C++? Sure, I'd feel much more comfortable. But Rust is too far outside my realm of expertise for me to feel comfortable with any such thoughts.

The main consideration against the C# syntax is operator precedence: await foo?

This is something I do feel like I can comment on. We thought about precedence a lot with 'await' and we tried out many forms before settling on the form we wanted. One of the core things we found was that for us, and the customers (internal and external) that wanted to use this feature, it was rarely the case that people really wanted to 'chain' anything past their async call. In other words, people seemed to strongly gravitate toward 'await' being the most important part of any full expression, and thus having it be near the top. Note: by 'full expression' I mean things like the expression you get at the top of an expression-statement, or the expression on the right of a top-level assignment, or the expression you pass as an 'argument' to something.

The tendency for people to want to 'continue on' with the 'await' inside an expression was rare. We do occasionally see things like (await expr).M(), but that seems less common and less desired than await expr.M().

This is also why we didn't go with any 'implicit' form for 'await'. In practice it was something people wanted to think very clearly about, and which they wanted front and center in their code so they could pay attention to it. Interestingly enough, even years later, this tendency has remained. I.e. sometimes we regret, many years later, that something is excessively verbose. Some features are good that way early on, but once people are comfortable with them, they are better suited to something terser. That has not been the case with 'await'. People still seem to really like the heavyweight nature of that keyword and the precedence we picked.

So far, we've been very happy with the precedence choice for our audience. We might, in the future, make some changes here. But overall there is no strong pressure to do so.

--

There is also the difference that, unlike in C#, execution doesn't run on the caller's thread until the first await; it doesn't start at all, the same way the code snippet doesn't run the negative-argument check until GetEnumerator is called for the first time:

IMO, the way we did enumerators was somewhat of a mistake and has led to a bunch of confusion over the years. It's been especially bad because of the propensity for a lot of code to have to be written like this:

IEnumerable<T> SomeEnumerator(X args)
{
    // Validate args, do synchronous work.
    return SomeEnumeratorImpl(args);
}

IEnumerable<T> SomeEnumeratorImpl(X args)
{
    // ...
    yield return ...;
    // ...
}

People have to write this *all the time* because of the unexpected behavior that the iterator pattern has. I think we were worried about expensive work happening initially. However, in practice, that doesn't seem to happen, and people definitely think about the work as happening when the call happens, and the yields themselves as happening when you actually start streaming the elements.

LINQ (which is the poster child for this feature) needs to do this *everywhere*, which greatly diminishes the value of this choice.

For await I think things are *much* better. We use async/await a ton ourselves, and I don't think I've ever once said "man... I wish that it wasn't running the code synchronously up to the first 'await'". It simply makes sense given what the feature is. The feature is literally "run the code up to the await points, then 'yield', then resume once the work you're yielding on completes". It would be super weird to me not to have these semantics, since it is precisely the awaits that are dictating flow, so why would anything be different prior to hitting the first await?

Also... how do things then work if you have something like this:

async Task FooAsync()
{
    if (cond)
    {
        // only await in method
        await ...
    }
}

You can totally call this method and never hit an await. If "execution doesn't run in the caller's thread until the first await", what actually happens here?

await? f is available as syntactic sugar for (await f)?. I believe "future returning a result" will be a super common case, so a dedicated syntax makes a lot of sense.

This resonates the most with me. It allows 'await' to be the topmost concept, but also allows simple handling of Result types.

One thing we know from C# is that people's intuition around precedence is tied to whitespace. So if you have "await x?" then it immediately feels like await has less precedence than ? because the ? abuts the expression. If the above actually parsed as (await x)? that would be surprising to our audience.

Parsing it as await (x?) would feel the most natural just from the syntax, and would fit the need of getting a 'Result' of a future/task back, and wanting to 'await' that if you actually received a value. If that then returned a Result itself, it feels appropriate to have that combined with the 'await' to signal that it happens afterwards. So with await? x?, each ? binds tightly to the portion of the code it most naturally relates to. The first ? relates to the await (and specifically the result of it), and the second relates to the x.

if "execution doesn't run in caller thread until first await" what actually happens here?

Nothing happens until the caller awaits the return value of FooAsync, at which point FooAsync's body runs until it either hits an await or returns.

It works this way because Rust Futures are poll-driven, stack-allocated, and immovable after the first call to poll. The caller must have a chance to move them into place--on the heap for top-level Futures, or else by-value inside a parent Future, often on the "stack frame" of a calling async fn--before any code is executed.
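For concreteness, here is a minimal hand-written future, using the std Future trait and the futures crate's block_on purely for illustration, showing that none of the body runs until the future is polled:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct SaysHello;

impl Future for SaysHello {
    type Output = ();

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        // This line only runs when an executor polls the future.
        println!("polled!");
        Poll::Ready(())
    }
}

fn main() {
    let fut = SaysHello;              // nothing is printed here
    futures::executor::block_on(fut); // "polled!" is printed here
}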

This means we're stuck with either a) C# generator-like semantics, where no code runs at invocation, or b) Kotlin coroutine-like semantics, where calling the function also immediately and implicitly awaits it (with closure-like async { .. } blocks for when you do need concurrent execution).

I kind of favor the latter, because it avoids the problem you mention with C# generators, and also avoids the operator precedence question entirely.

@CyrusNajmabadi In Rust, Future usually does no work until it is spawned as a Task (it's much more similar to F# Async):

let bar = foo();

In this case foo() returns a Future, but it probably doesn't actually do anything. You have to manually spawn it (which is also similar to F# Async):

tokio::run(bar);

When it is spawned, it will then run the Future. Since this is the default behavior of Future, it would be more consistent for async/await in Rust to not run any code until it is spawned.

Obviously the situation is different in C#, because in C# when you call foo() it immediately starts running the Task, so it makes sense in C# to run code until the first await.

Also... how do things then work if you have something like this [...] You can totally call this method and never hit an await. if "execution doesn't run in caller thread until first await" what actually happens here?

If you call FooAsync() then it does nothing, no code is run. Then when you spawn it, it will run the code synchronously, the await will never run, and so it immediately returns () (which is Rust's version of void)

In other words, it's not "execution doesn't run in caller thread until first await", it's "execution doesn't run until it is explicitly spawned (such as with tokio::run)"

Nothing happens until the caller awaits the return value of FooAsync, at which point FooAsync's body runs until either an await or it returns.

Ick. That seems unfortunate. There are many times I may not ever get around to awaiting something (often due to cancellation and composition with tasks). As a dev I'd still appreciate getting early errors for those (which is one of the most common reasons people want execution to run up to the await).

This means we're stuck with either a) C# generator-like semantics, where no code runs at invocation, or b) Kotlin coroutine-like semantics, where calling the function also immediately and implicitly awaits it (with closure-like async { .. } blocks for when you do need concurrent execution).

Given these, I'd far prefer the former to the latter. Just my personal preference though. If the Kotlin approach feels more natural for your domain, then go for that!

@CyrusNajmabadi Ick. That seems unfortunate. There are many times I may not ever get around to awaiting something (often due to cancellation and composition with tasks). As a dev I'd still appreciate getting early errors for those (which is one of the most common reasons people want execution to run up to the await).

I feel the exact opposite. In my experience with JavaScript it is very common to forget to use await. In that case the Promise will still run, but the errors will be swallowed (or other weird stuff happens).

With the Rust/Haskell/F# style, either the Future runs (with correct error handling), or it doesn't run at all. Then you notice that it isn't running, so you investigate and fix it. I believe this results in more robust code.

@Pauan @rpjohnst Thanks for the explanations. Those were approaches we considered as well. But it turned out to not actually be that desirable in practice.

In the cases where you didn't want it to "actually do anything. You have to manually spawn it", we found it cleaner to model that as returning something that generates tasks on demand, i.e. something as simple as Func<Task>.

I feel the exact opposite. In my experience with JavaScript it is very common to forget to use await.

C# does work to try to ensure that you either awaited, or otherwise used the task sensibly.

but the errors will be swallowed

That's the opposite of what I'm saying. I'm saying I want the code to execute eagerly so that errors are things I hit immediately, even in the event that I don't ever end up getting around to executing the code in the task. This is the same with iterators. I'd much rather know I was creating it incorrectly at the point in time when I call the function versus potentially much further down the line if/when the iterator is streamed.

Then you notice that it isn't running, so you investigate and fix it.

In the scenarios I'm talking about, "not running" is completely reasonable. After all, my application may decide at any point that it doesn't need to actually run the task. That's not the bug I'm describing. The bug I'm describing is that I didn't pass validation, and I want to find out about that as close as possible to the point where I logically created the work, as opposed to the point when the work actually needs to run. Given that these are models for describing async processing, it's often going to be the case that these are far away from each other. So having information about issues surface as early as possible is valuable.

As mentioned, this is not hypothetical either. A similar thing happens with streams/iterators. People often create them, but then don't realize them until later. It's been an extra burden for people to have to track these things back to their source. This is why so many APIs (including the BCL) now have to do the split between the synchronous/early work and the actual deferred/lazy work.

That's the opposite of what i'm saying. I'm saying i want the code to execute eagerly so that errors are things i hit immediately, even in the event that i don't ever end up getting around to executing the code in the task.

I can understand the desire for early errors, but I'm confused: under what situation would you ever "end up not getting around to spawning the Future"?

The way that Futures work in Rust is that you compose Futures together in various ways (including async/await, parallel combinators, etc.), and by doing this you build up a single fused Future which contains all the sub-Futures. Then at the top level of your program (main) you use tokio::run (or similar) to spawn it.

Aside from that single tokio::run call in main, you usually won't be spawning Futures manually; instead you just compose them. And the composition naturally handles spawning/error handling/cancellation/etc. correctly.
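A small sketch of that shape, assuming the futures crate, with ready and join standing in for real work and real combinators: composition only builds one bigger future, and nothing runs until the single executor call at the top level (block_on here stands in for tokio::run in main).

use futures::executor::block_on;
use futures::future::{self, join};

fn main() {
    // Two placeholder sub-futures.
    let a = future::ready(1);
    let b = future::ready(2);

    // Composition: no work has happened yet, we have only built a larger future.
    let both = join(a, b);

    // The one place execution is actually driven.
    let (x, y) = block_on(both);
    println!("{} {}", x, y);
}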

I also want to make something clear. When I say something like:

But it turned out to not actually be that desirable in practice.

I'm talking very specifically about things with our language/platform. I can only give insight into the decisions that made sense for C#/.Net/CoreFx etc. It may be completely the case that your situation is different and what you want to optimize for and the types of approaches you should take go in an entirely different direction.

I can understand the desire for early errors, but I'm confused: under what situation would you ever "end up not getting around to spawning the Future"?

All the time :)

Consider how Roslyn (the C#/VB compiler/IDE codebase) is itself written. It is heavily async and interactive, i.e. the primary use case for it is to be used in a shared fashion with many clients accessing it. Client services are constantly interacting with the user across a wealth of features, and many of them may decide that they no longer need to do work they originally thought was important, due to the user performing any number of actions. For example, as the user is typing, we're doing tons of task compositions and manipulations, and we may end up deciding to not even get around to executing them because another event came in a few ms later.

For example, as the user is typing, we're doing tons of task compositions and manipulations, and we may end up deciding to not even get around to executing them because another event came in a few ms later.

Isn't that just handled by cancellation, though?

And the composition naturally handles spawning/error handling/cancellation/etc. correctly.

It simply sounds like we have two very different models to represent things. That's fine :) My explanations are meant to be taken in the context of the model we choose. They may not make sense for the model you are choosing.

It simply sounds like we have two very different models to represent things. That's fine :) My explanations are meant to be taken in the context of the model we choose. They may not make sense for the model you are choosing.

Absolutely, I'm just trying to understand your perspective, and also explaining our perspective. Thank you for taking the time to explain things.

Isn't that just handled by cancellation, though?

Cancellation is an orthogonal concept to asynchrony (for us). They're commonly used together. But neither necessitates the other.

You could have a system entirely without cancellation, and it may simply be the case that you just never get around to running the code that 'awaits' the tasks that you've composed. i.e. for logical reasons your code may just go "I don't need to await 't', I'm just going to do something else". Nothing about tasks (in our world) dictates or necessitates the expectation that that task be awaited. In such a system, I would want to get early validation.

Note: this is similar to the iterator problem. You may call someone to get results you intend to use later on in your code. However, for any number of reasons, you may not end up actually having to use the results. My personal desire would still be to get the validation results early, even if I technically could have gotten away without them and had my program succeed.

I think there are reasonable arguments for both directions. But my take is that the synchronous approach has had more pros than cons. Of course, if the synchronous approach literally does not fit due to how your actual impl wants to work, then that seems to answer the question of what you need to do :D

In other words, I don't think your approach is bad here. And if it has strong benefits around this model you think is right for Rust, then def go for it :)

You could have a system entirely without cancellation, and it may simply be the case that you just never get around to running the code that 'awaits' the tasks that you've composed. i.e. for logical reasons your code may just go "I don't need to await 't', I'm just going to do something else".

Personally, I think that's best handled by the usual if/then/else logic:

async fn foo() {
    if some_condition {
        await!(bar());
    }
}

But as you say, it's just a very different perspective from C#.

Personally, I think that's best handled by the usual if/then/else logic:

Yes, that would be fine if checking the condition could be done at the same point the task is created (and tons of cases are like this). But in our world it's commonly not the case that things are so well connected. After all, we want to eagerly do async work in response to users (so that the results are ready when needed), but we may later on decide we don't care anymore.

In our domains the 'await' happens at the point the person "needs the value", which is a different determination/component/etc. from the decision about "should i start working on the value?"

In a sense, these are very decoupled, and that's viewed as a virtue. The producer and consumer can have entirely different policies, but can communicate effectively about the async work being done through the nice abstraction of the 'Task'.

Anyway, I'll back out of the sync/async debate. Clearly there are very different models at play here. :)

In terms of precedence, I've given some information on how C# thinks about things. I hope it is helpful. Let me know if you want any more information there.

@CyrusNajmabadi Yes, your insights were quite helpful. Personally I agree with you that await? foo is the way to go (though I also like the "explicit async" proposal as well).

BTW, if you want one of the best expert opinions on all the intricacies of the .NET model around modeling async/sync work, and all the pros/cons of that system, then @stephentoub would be the person to talk to. He would be about 100x better than me at explaining things, clarifying the pros/cons, and likely being able to dive deep into the models on both sides. He's intimately familiar with .NET's approach here (including the choices made and the choices rejected) and with how it has had to evolve since the beginning. He's also painfully aware of the perf costs of the approaches .NET has taken (which is one of the reasons ValueTask now exists), which I imagine would be something you guys are thinking about first and foremost given your desire for zero/low-cost abstractions.

From my recollection, similar thoughts about these splits were put into .NET's approach in the early days, and I think he could speak very well to the ultimate decisions that were made and how appropriate they've been.

I'd still vote in favor of await? future even if it looks a bit unfamiliar. Are there any real downsides to composing those?

Here's another thorough analysis of the pros and cons of cold (F#) vs hot (C#,JS) asyncs: http://tomasp.net/blog/async-csharp-differences.aspx

There now is a new RFC for postfix macros that would allow experimentation with postfix await without a dedicated syntax change: https://github.com/rust-lang/rfcs/pull/2442

await {} is my favorite one here, reminiscent of unsafe {} plus it shows precedence.

let value = await { future }?;

@seunlanlege
yes, it's reminiscent, so people will have the false assumption that they can write code like this:

let value = await {
   let val1 = future1;
   future2(val1)
}

But they can't.

@Pzixel
if I understand you correctly, you're assuming people would expect futures to be implicitly awaited inside an await {} block? I disagree with that. await {} would only await the expression the block evaluates to.

let value = await {
    let future = create_future();
    future
};

And it should be a pattern that is discouraged

simplified

let value = await { create_future() };

You propose a statement where more than one expression "should be discouraged". Don't you see anything wrong with it?

Would it be favorable to make await a pattern (alongside ref etc.)?
Something like:

let await n = bar();

I'd prefer to call that an async pattern rather than await, although I don't see much advantage in making it a pattern syntax. Pattern syntaxes generally work dually with respect to their expression counterparts.

According to the current page at https://doc.rust-lang.org/nightly/std/task/index.html, the task module consists of both reexports from libcore and reexports from liballoc, which makes the result a little ... suboptimal. I hope this is addressed somehow before it stabilizes.

I took a look at the code. And I have a few suggestions:

  • [x] The UnsafePoll trait and Poll enum have very similar names, but they are not related. I suggest to rename UnsafePoll, e.g. to UnsafeTask.
  • [x] In the futures crate the code was split up into different submodules. Now, most code is bunched together in task.rs which makes it harder to navigate. I suggest splitting it up again.
  • [x] TaskObj#from_poll_task() has an odd name. I suggest naming it new() instead
  • [x] TaskObj#poll_task could just be poll(). The field called poll could be called poll_fn which would also suggest that it's a function pointer
  • Waker might be able to use the same strategy as TaskObj and put the vtable on the stack. Just an idea, I don't know whether we want this. Would it be faster because it's a little less indirection?
  • [ ] dyn is now stable in beta. The code should probably use dyn where it applies

I can provide a PR for this stuff as well. @cramertj @aturon feel free to reach out to me via Discord to discuss the details.

How about just adding an await() method to all Futures?

    /// just like and_then method
    let x = f.and_then(....);
    let x = f.await();

    await f?     =>   f()?.await()
    await? f     =>   f().await()?

/// with chain invoke.
let x = first().await().second().await()?.third().await()?
let x = first().await()?.second().await()?.third().await()?
let x = first()?.await()?.second().await()?.third().await()?

@zengsai The problem is that await doesn't work like a regular method. In fact, consider what an await method would do when not in an async block/function. Methods don't know in what context they are executed, so it couldn't cause a compilation error.

@xfix this is not true in general. The compiler can do whatever it wants and could handle the method call specially in this case. The method-style call solves the precedence issue, but it is unexpected (await does not work this way in other languages) and would probably be an ugly hack in the compiler.

@elszben That the compiler can do whatever it wants doesn't mean it should do whatever it wants.

future.await() sounds like a regular function call, while it is not. If you want to go this way, the future.await!() syntax proposed somewhere above would allow the same semantics, and clearly mark with a macro “Something weird is going on here, I know.”

Edit: Post removed

I moved this post into the futures RFC. Link

Has anyone looked at the interaction between async fn and #[must_use]?

If you have an async fn, calling it directly runs no code and returns a Future; it seems like all async fn should have an inherent #[must_use] on the "outer" impl Future type, so you can't call them without doing something with the Future.

On top of that, if you attach a #[must_use] to the async fn yourself, it seems like that should apply to the inner function's return. So, if you write #[must_use] async fn foo() -> T { ... }, then you can't write await!(foo()) without doing something with the result of the await.
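
A sketch of the behaviour being asked about; none of these warnings exist today, `do_work` is a made-up name, and the feature gates follow the status-quo example later in the thread:

#![feature(async_await, futures_api)]

#[must_use]
async fn do_work() -> bool {
    true
}

async fn caller() {
    do_work();                   // proposal 1: warn — the returned future is never polled
    await!(do_work());           // proposal 2: warn — the awaited `bool` is never used
    let _ok = await!(do_work()); // fine under both proposals
}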

Has anyone looked at the interaction between async fn and #[must_use]?

For others interested in this discussion, see https://github.com/rust-lang/rust/issues/51560.

I was thinking about how asynchronous functions are implemented and realized that these functions don't support recursion, or mutual recursion either.

For the await syntax, I am personally in favor of the postfix-macro, no-implicit-await approach, for its easy chaining and because it can also be used somewhat like a method call.

@warlord500 you are completely ignoring the entire experience of millions of developers described above. You don't want to chain awaits.

@Pzixel please don't presume I haven't read the thread or don't know what I want. I know that some contributors might not want to allow chaining awaits, but there are some of us developers who do. I am not sure where you even got the notion that I was ignoring developers' opinions; my comment was only expressing the opinion of one member of the community and my reasons for holding that opinion.

EDIT: if you have a difference of opinion then please do share! I am curious why you say we shouldn't allow chaining awaits via a method-like syntax.

@warlord500 because the MS team shared its experience across thousands of customers and millions of developers. I know it myself because I write async/await code on a day-to-day basis, and you never want to chain them. Here is the exact quote, if you wish:

We thought about precedence a lot with 'await' and we tried out many forms before settling on the form we wanted. One of the core things we found was that for us, and the customers (internal and external) that wanted to use this feature, it was rarely the case that people really wanted to 'chain' anything past their async call. In other words, people seemed to strongly gravitate toward 'await' being the most important part of any full-expression, and thus having it be near the top. Note: by 'full expression' I mean things like the expression you get at the top of an expression-statement, or the expression on the right of a top-level assignment, or the expression you pass as an 'argument' to something.

The tendency for people to want to 'continue on' with the 'await' inside an expr was rare. We do occasionally see things like (await expr).M(), but those seem less common and less desirable than the amount of people doing await expr.M().

I am now quite confused. If I understand you correctly, we shouldn't support an easy-to-chain postfix await style because it isn't commonly used? You see await as being the most important part of an expression. I only restate this to make sure I understand you correctly; if I am wrong, don't hesitate to correct me.

Also, could you please post the link to where you got the quote? Thank you.

My counter to the two points above is: just because you don't use something commonly doesn't necessarily mean that supporting it would be harmful in the cases where it makes code cleaner.

Sometimes await isn't the most important part of an expression. If the future-generating expression is the most important part and you'd like to put it toward the top, you could still do that if we allowed a postfix macro style in addition to the normal macro style.

Also, could you please post the link to where you got the quote? Thank you.

But... but you said that you have read the whole thread... 😃

But I have no problem with sharing it: https://github.com/rust-lang/rust/issues/50547#issuecomment-388939886 . I suggest you read all of Cyrus's posts; it's really the experience of the whole C#/.NET ecosystem, a priceless experience that can be reused by Rust.

Sometimes await isn't the most important part of an expression

The quote clearly says the opposite 😄 And you know, I have the same feeling myself, writing async/await on a day-to-day basis.

Do you have any experience with async/await? Can you share it then, please?

Wow, I can't believe I missed that. Thank you for taking time out of your day to link it. I don't have any experience, so I guess in the grand scheme of things my opinion doesn't matter all that much.

@Pzixel I appreciate you sharing information about your and others' experience using async/await, but please be respectful to other contributors. You don't need to criticize the experience levels of others in order to make your own technical points heard.

Moderator note: @Pzixel Personal attacks on community members are not allowed. I've edited it out of your comment. Do not do it again. If you have questions about our moderation policy, please follow up with us at [email protected].

@crabtw I didn't criticize anyone in this thread. I apologize for any inconvenience that may have taken place here.

I asked about experience once because I wanted to understand whether the person had an actual need for chaining awaits or was extrapolating from today's features. I didn't want to appeal to authority; it's just a useful bundle of information where I can say "you need to try it yourself and realize this truth yourself". Nothing offensive here.

Personal attacks on community members are not allowed. I've edited it out of your comment.

No personal attacks intended. As I can see, you commented out my reference to the downvotes. Well, it was just my reaction to my post being downvoted, nothing special. As it was removed, it's also reasonable to remove that reference (it might even be confusing for later readers), so thank you for taking it out.

Thanks for the reference. I did want to mention that you should take none of what I say as 'gospel' :) Rust and C# are different languages with different communities, paradigms and idioms. You should definitely make the best choices for your language. I do hope my words are helpful and can give insight. But always be open to different ways to do things.

My hope is you come up with something amazing for Rust. Then we can see what you did and ~~steal~~ graciously adopt it for C# :)

As far as I can tell, the linked argument primarily talks about the precedence of await, and in particular argues that it makes sense to parse await x.y() as await (x.y()) rather than (await x).y() because the user will more often want and expect the former interpretation (and the spacing also suggests that interpretation). And I would tend to agree, though I'd also observe that syntax like await!(x.y()) removes the ambiguity.

However, I don't think that suggests any particular answer regarding the value of chaining like x.y().await!().z().

The quoted comment is interesting in part because there's a big difference in Rust, which has been one of the big factors in delaying our figuring out the final await syntax: C# has no ? operator, so they have no code that would need to be written (await expr)?. They describe (await expr).M() as really uncommon, and I tend to think that would be true in Rust as well, but the only exception to that, from my perspective, is ?, which will be very common because many futures will evaluate to results (all of them that exist right now do, for example).

@withoutboats yes, that's right. I'd like to quote this part once more:

the only exception to that, from my perspective, is ?

If that is the only exception, then it seems reasonable to create await? foo as a shortcut for (await foo)? and have the best of both worlds.

Right now, at least, the proposed syntax of await!() will allow unambiguous use of ?. We can worry about some shorter syntax for the combination of await and ? if and when we decide to change the base syntax for await. (And depending on what we change it to, we might not have an issue at all.)

@joshtriplett these extra braces remove ambiguity, but they are really heavy. E.g. a search across my current project:

Matching lines: 139 Matching files: 10 Total files searched: 77

I have 139 awaits in 2743 SLOC. Maybe it's not a big deal, but I think we should consider a braceless alternative as the cleaner and better one. That being said, ? is the only exception, so we could easily use await foo without braces and introduce a special syntax just for this special case. It's not a big deal, but it could save some braces for a LISP project.
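
For comparison, a small self-contained sketch of the pattern under discussion; `fetch` is invented, only the `await!(..)?` line is syntax that exists today, and the `await?` line is the proposed shorthand:

#![feature(async_await, futures_api)]

use std::io;

async fn fetch() -> io::Result<u32> {
    Ok(42)
}

async fn caller() -> io::Result<()> {
    let n = await!(fetch())?;  // await first, then `?` on the Result
    // let n = await? fetch(); // proposed shorthand for the line above (not real syntax)
    println!("fetched {}", n);
    Ok(())
}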

I've created a blog post about why I think async functions should use the outer return type approach for their signature. Enjoy reading!

https://github.com/MajorBreakfast/rust-blog/blob/master/posts/2018-06-19-outer-return-type-approach.md

I haven't followed all the discussions, so feel free to point me to where this has already been discussed if I missed it.

Here is an additional concern about the inner return type approach: what would the syntax for Streams look like, when it gets specified? I would think async fn foo() -> impl Stream<Item = T> would look nice and consistent with async fn foo() -> impl Future<Output = T>, but it wouldn't work with the inner return type approach. And I don't think we'll want to introduce an async_stream keyword.

@Ekleog Stream would need to use a different keyword. It can't use async because impl Trait works the other way around. It can only ensure that certain traits are implemented, but the traits themselves need to be already implemented on the underlying concrete type.

The outer return type approach would, however, come in handy if we would one day like to add async generator functions:

async_gen fn foo() -> impl AsyncGenerator<Yield = i32, Return = ()> { yield 1; ... }

Stream could be implemented for all async generators with Return = (). This makes this possible:

async_gen fn foo() -> impl Stream<Item = i32> { yield 1;  ... }

Note: Generators are in nightly already, but they don't use this syntax. Currently they use the closure syntax without a marker. They are also currently not pinning-aware, unlike Stream in futures 0.3.

Edit: This code previously used a Generator. I missed a difference between Stream and Generator. Streams are asynchronous. This means that they may, but don't have to, yield a value: they can respond with either Poll::Ready or Poll::Pending. A Generator, on the other hand, always has to yield or complete synchronously. I have now changed it to AsyncGenerator to reflect this.

Edit2: @Ekleog The current implementation of generators uses a syntax without a marker and seems to detect that it should be a generator by looking for a yield inside the body. This means that you would be correct in saying that async could be reused. Whether that approach is sensible is another question, though. But I guess that's for another topic ^^'

Indeed, I was thinking that async could be reused, if only because async would, as per this RFC, only be allowed with Futures, and could thus detect that it's generating a Stream by looking at the return type (which must be either a Future or a Stream).

The reason why I'm raising this now is that if we want to have the same async keyword for generating both Futures and Streams, then I think the outer return type approach would be much cleaner, because it would be explicit, and I don't think anyone would expect that an async fn foo() -> i32 would yield a stream of i32 (which would be possible if the body contained a yield and the inner return type approach was picked).

We could have a second keyword for generators (e.g. gen fn), and then create streams just by applying both (e.g. async gen fn). The outer return type doesn't need to come into this at all.

@rpjohnst I brought it up because the outer return type approach makes it possible to easily set two associated types.

We don't want to set two associated types. A Stream is still just a single type, not impl Iterator<Item=impl Future>> or anything like that.

@rpjohnst I meant the associated types Yield and Return of (async) generators

gen fn foo() -> impl Generator<Yield = i32, Return = ()> { ... }

This was my original sketch, but I think talking about generators is getting too far ahead of ourselves, at least for the tracking issue:

// generator
fn foo() -> T yields Y

// generator that implements Iterator
fn foo() yields Y

// async generator
async fn foo() -> T yields Y

// async generator that implements Stream
async fn foo() yields Y

More generally, I think we should have more experience with the implementation before we revisit any decisions made in the RFC. We're circling around the same arguments that we've already made, we need experience with the feature as proposed by the RFC to see if a re-weighting is needed.

I'd like to fully agree with you, but I just wonder: if I read your comment correctly, stabilization of the async/await syntax will wait for a decent syntax and implementation for async streams, in order to gain experience with the two of them? (as it wouldn't be possible to change between outer return types and inner return types once it's stabilized)

I thought async/await was expected for Rust 2018, and I wouldn't hope for async generators to be ready by then, but…?

(Also, my comment was intended only as an additional argument to @MajorBreakfast's blog post, yet it appears to have completely erased discussion on this topic… that was not at all my objective, and I guess the debate should re-center on this blog post?)

The narrow use case of the await keyword still confuses me. (Esp. Future vs Stream vs Generator)

Wouldn't a yield keyword be sufficient for all use cases? As in

{ let a = yield future; println(a) } -> Future

This keeps the return type explicit, and therefore only one keyword is needed for all "continuation"-based semantics, without fusing keyword and library together too tightly.

(We did this in the clay language btw)

@aep await doesn't yield a future from the generator-- it pauses the execution of the Future and returns control to the caller.

@cramertj well, it could have done exactly that though (returned a future which contains the continuation after the yield keyword), which is a much broader use case. But I guess I'm a little late to the party for that discussion? :)

@aep The reasoning for an await-specific keyword is for composability with a future generator-specific yield keyword. We want to support async generators and that means two independent continuation "scopes."

Also, it can't return a future which contains the continuation, because Rust futures are poll-based not callback-based, at least partially for memory management reasons. Much easier for poll to mutate a single object than for yield to throw around references to it.

I think async/await should not be keywords, because that pollutes the language itself; async is just a feature, not part of the language's internals.

@sackery It is part of the language's internals, and cannot be implemented purely as a library.

So just make it a keyword, like Nim and C# do!

Question: what should the signature of async non-move closures that capture values by mutable reference be? Currently they're just banned outright. It seems like we want some sort of GAT approach that would allow the borrow of the closure to last until the future is dead, e.g.:

trait AsyncFnMut {
    type Output<'a>: Future;
    fn call(&'a mut self, args: ...) -> Self::Output<'a>;
}
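
For context, a sketch of the kind of closure that is rejected today (nightly-only, feature gates omitted, names invented): the future returned by the closure keeps borrowing `count` mutably, which is what a GAT-style signature like the one above would have to express.

fn main() {
    let mut count = 0;
    // Non-move async closure capturing `count` by mutable reference; the
    // borrow would have to live as long as the returned future.
    let mut incr = async || {
        count += 1;
        count
    };
    let _fut = incr(); // currently an error: such closures are banned outright
}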

@cramertj there's a general problem here with returning mutable references to the captured environment of a closure. Possibly the solution need not be tied to async fn?

@withoutboats right, it's just going to be much more common in async situations than it would probably be elsewhere.

How about fn async instead of async fn?
I like let mut better than mut let.

fn foo1() {
}
fn async foo2() {
}
pub fn foo3() {
}
pub fn async foo4() {
}

If you search for pub fn, you can still find all public functions in the source code; with the current syntax, you can't.

fn foo1() {
}
async fn foo2() {
}
pub fn foo3() {
}
pub async fn foo4() {
}

This proposal is not very important; it's a matter of personal taste. So I respect all of your opinions :)

I believe all modifiers should go before fn. It's clear, and it's how it's done in other languages. It's just common sense.

@Pzixel I know that access modifiers should go before fn because they're important, but I think async probably isn't.

@xmeta I haven't seen this idea proposed before. We probably want to put async in front of fn to be consistent, but I think it's important to consider all options. Thanks for posting!

// Status quo:
pub unsafe async fn foo() {} // #![feature(async_await, futures_api)]
pub const unsafe fn foo2() {} // #![feature(const_fn)]

@MajorBreakfast Thank you for your reply. I thought of it like this:

{ Public, Private } ⊇ Function  → put `pub` in front of `fn`
{ Public, Private } ⊇ Struct    → put `pub` in front of `struct`
{ Public, Private } ⊇ Trait     → put `pub` in front of `trait`
{ Public, Private } ⊇ Enum      → put `pub` in front of `enum`
Function ⊇ {Async, Sync}        → put `async` in back of `fn`
Variable ⊇ {Mutable, Immutable} → put `mut` in back of `let`

@xmeta @MajorBreakfast

async fn is indivisible; it represents an asynchronous function.

async fn is a whole.

When you search for pub fn, it means you're searching for public synchronous functions.
In the same way, when you search for pub async fn, it means you're searching for public asynchronous functions.

@ZhangHanDong

  • async fn defines a normal function that returns a future. All functions that return a future are considered "asynchronous". The function pointers of async fns and other functions that return a future are identical°. Here's a playground example. A search for "async fn" can only find the functions that use the notation, it won't find all asynchronous functions.
  • A search for pub fn won't find unsafe or const functions.

° The concrete type returned by an async fn is of course anonymous. I mean that they both return a type that implements Future
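
To illustrate the first bullet, a minimal sketch (names invented; feature gates as in the status-quo example earlier in the thread): an async fn and a plain fn returning a future are used identically by callers.

#![feature(async_await, futures_api)]

use std::future::Future;

async fn via_sugar() -> u32 {
    1
}

// The "desugared" flavour: an ordinary fn returning a future.
fn via_desugar() -> impl Future<Output = u32> {
    async { 2 }
}

// Callers treat both the same way.
async fn caller() -> u32 {
    await!(via_sugar()) + await!(via_desugar())
}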

@xmeta note that mut does not "go after let", or rather, that mut does not modify let. let takes a pattern, that is

let PATTERN = EXPRESSION;

mut is part of the PATTERN, not of the let itself. For example:

// one is mutable one is not
let (mut a, b) = (1, 2);

@steveklabnik I understand. I just wanted to show the association between hierarchical structure and word order. Thank you

What are people's thoughts on the desired behavior of return and break inside of async blocks? Currently return returns from the async block-- if we allow return at all, this is really the only possible option. We could outright ban return and use something like 'label: async { .... break 'label x; } to return from an async block. This also ties into the conversation around whether to use the keyword break or return for the break-to-blocks feature (https://github.com/rust-lang/rust/issues/48594).

I am in favor of allowing return. The main concern about forbidding it is that it could be confusing because it doesn't return from the current function, but from the async block. I, however, doubt that it will be confusing. Closures already allow return and I never found it to be confusing. Learning that return applies to async blocks as well is IMO easy, and allowing it is IMO quite valuable.

@cramertj return should always exit the containing function, never an inner block; if it doesn't make sense for that to work, which it sounds like it doesn't, then return should not work at all.

Using break for this seems unfortunate, but given that we unfortunately have label-break-value, then it's at least consistent with that.
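
To make the two options concrete, a sketch; only the first block reflects behaviour that exists today, and the labeled form is the hypothetical alternative being discussed:

#![feature(async_await, futures_api)]

fn options(early: bool) {
    // Option A (current behaviour under discussion): `return` exits the async
    // block itself, so this future resolves to 0 or 42.
    let _a = async move {
        if early {
            return 0;
        }
        42
    };

    // Option B (hypothetical syntax, does not parse today): ban `return` and
    // use a labeled break out of the block instead.
    //
    // let _b = 'blk: async move {
    //     if early {
    //         break 'blk 0;
    //     }
    //     42
    // };
}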

Are async moves and closures still planned? The following is from the RFC:

// closure which is evaluated immediately
async move {
     // asynchronous portion of the function
}

and further down the page

async { /* body */ }

// is equivalent to

(async || { /* body */ })()

which makes return aligned with closures, and seems quite easy to pick up and explain.

Is the break-to-block RFC planning on allowing jumping out of an inner closure with a label? If not (and I'm not suggesting it should allow that), it would be very unfortunate to disallow return's consistent behavior and then use an alternative that is also inconsistent with the break-to-blocks RFC.

@memoryruins async || { ... return x; ... } should absolutely work. I'm saying that async { ... return x; ... } shouldn't, precisely because async is not a closure. return has a very specific meaning: "return from the containing function". Closures are a function. async blocks are not.

@memoryruins Both of those are already implemented.

@joshtriplett

async blocks are not.

I guess I still think about them as functions in the sense that they're a body with a separately defined execution context from the block that contains them, so it makes sense to me that return is internal to the async block. The confusion here seems mostly syntactic, in that blocks are usually just wrappers for an expression rather than things that bring code into a new execution context like || and async do.

@cramertj "syntactic" is important, though.

Think about it this way. If you have something that doesn't look like a function (or like a closure, and you're used to recognizing closures as functions), and you see a return, where does your mental parser think it goes?

Anything that hijacks return makes it more confusing to read someone else's code. People are at least used to the idea that break returns to some parent block and they'll have to read the context to know which block. return has always been the bigger hammer that returns from the whole function.

If they aren't being treated similarly to immediately evaluated closures, I agree that return would then be inconsistent, especially syntactically. If ? in async blocks has already been decided (the RFC still says it was undecided), then I imagine return would be aligned with that.

@joshtriplett it feels arbitrary to me to say that you can recognize functions and closures (which are syntactically very different) as "return scopes" but async blocks can't be recognized along the same lines. Why are two distinct syntactic forms acceptable, but not three?

There was some prior discussion of this topic on the RFC. As I said there I’m in favour of async blocks using break _without_ having to provide a label (there’s no way to break out of the async block to an outer loop so you don’t lose any expressivity).

@withoutboats A closure is just another kind of function; once you learn "a closure is a function" then you can apply everything you know about functions to closures, including "return always returns from the containing function".

@Nemo157 Even if you have unlabeled break target the async block, you'd have to provide a mechanism (like 'label: async) for returning early from a loop inside an async block.

@joshtriplett

A closure is just another kind of function; once you learn "a closure is a function" then you can apply everything you know about functions to closures, including "return always returns from the containing function".

I think async blocks are also a kind of "function"-- one with no arguments which can be run to completion asynchronously. They're a special-case of async closures that have no arguments and have been pre-applied.

@cramertj yep, I was assuming that any implicit break point can also be labelled if necessary (as I believe they all can currently).

Anything that makes control flow harder to follow, and in particular redefines what return means, puts a great deal of strain on the ability to smoothly read code.

Along the same lines, standard guidance in C is "don't write macros that return from the middle of the macro". Or, as a less common but still problematic case: if you write a macro that looks like a loop, break and continue should work from inside it. I've seen people write loop-ish macros that actually embed two loops, so break doesn't work as expected, and that's extremely confusing.

I think async blocks are also a kind of "function"

I think that's a perspective based on knowing the internals of the implementation.

I don't see them as functions at all.

I don't see them as functions at all.

@joshtriplett

My suspicion is that you would've made the same argument coming to a language with closures for the first time-- that return shouldn't work within the closure, but within the defining function. And indeed, there are languages that take this interpretation, like Scala.

@cramertj I would not, no; for lambdas and/or functions defined within a function, it feels completely natural that they're a function. (My first exposure to those was in Python, FWIW, where lambdas can't use return and in nested functions return returns from the function containing the return.)

I think that once one knows what an async block does, it is intuitively clear how return must behave. Once you know that it represents a delayed execution, it's clear that return cannot apply to the function. It's clear that function will have returned already by the time the block runs. IMO learning this shouldn't be much of a challenge. We should at least try it out and see.

This RFC does not propose how the ?-operator and control-flow constructs like return, break and continue should work inside async blocks.

Would it be best to disallow any control flow operators, or to postpone async blocks until a dedicated RFC is written? There were other desired features mentioned to be discussed later. In the meantime, we will have async functions, closures, and await! :)

I agree with @memoryruins here, I think it would be worth creating another RFC to discuss those specifics in more detail.

What do you think about a function that lets us access the context from inside an async fn, maybe called core::task::context()? It would simply panic if called from outside an async fn. I think that would be quite handy, e.g. to access the executor to spawn something.

@MajorBreakfast that function is called lazy

async fn foo() -> i32 {
    await!(lazy(|ctx| {
        // do something with ctx
        42
    }))
}

For anything more specific like spawning there will likely be helper functions that make it more ergonomic

async fn foo() -> i32 {
    let some_task = lazy(|_| 5);
    let spawned_task = await!(spawn_with_handle(some_task));
    await!(spawned_task)
}

@Nemo157 Actually spawn_with_handle is where I'd like to use this. When converting the code to 0.3 I noticed that spawn_with_handle is actually only a future because it needs access to the context (see code). What I'd like to do is to add a spawn_with_handle method to ContextExt and make spawn_with_handle a free function that only works inside async functions:

fn poll(self: PinMut<Self>, cx: &mut Context) -> Poll<Self::Output> {
     let join_handle = cx.spawn_with_handle(future);
     ...
}
async fn foo() {
   let join_handle = spawn_with_handle(future); // This would use this function internally
   await!(join_handle);
}

This would remove all the double-await nonsense that we currently have.

Come to think of it, the method would need to be called core::task::with_current_context() and work a bit differently because it must be impossible to store a reference.

Edit: This function already exists under the name get_task_cx. It is currently in libstd for technical reasons. I propose to make it public API once it can be put into libcore.

I doubt it will be possible to have a function that can be called from a non-async function that could give you the context from some parent async function once it's been moved out of TLS. At that point the context will likely be treated like a hidden local variable inside the async function, so you could have a macro that accesses the context directly in that function, but there would be no way to have spawn_with_handle magically pull the context out of its caller.

So, potentially something like

fn spawn_with_handle(executor: &mut Executor, future: impl Future) { ... }

async fn foo() {
    let join_handle = spawn_with_handle(async_context!().executor(), future);
    await!(join_handle);
}

@Nemo157 I think you're right: A function like I'm proposing likely couldn't work if it's not called directly from within the async fn. Maybe the best way is to make spawn_with_handle a macro that uses await internally (like select! and join!):

async fn foo() {
    let join_handle = spawn_with_handle!(future);
    await!(join_handle);
}

This looks nice and can be implemented easily via await!(lazy(|ctx| { ... })) inside the macro.

async_context!() is problematic because it can't prevent me from storing the context reference across await points.

async_context!() is problematic because it can't prevent me from storing the context reference across await points.

Depending on the implementation, it can. If full generator arguments are resurrected, they would need to be limited such that you can't keep a reference across the yield point anyway: the value behind the argument would have a lifetime that only runs until the yield point. Async/await would just inherit that limitation.

@Nemo157 You mean something like this?

let my_arg = yield; // my_arg lives until next yield

@Pzixel Sorry to awake a _possibly_ old discussion, but I'd like to add my thoughts.

Yes, I do like that the await!() syntax removes ambiguity when combining it with things like ?, yet I do also agree that this syntax is annoying to type a thousand times in a single project. I also believe it's noisy, and clean code is important.

That's why I'm wondering what the real argument is against a suffixed symbol (which has been mentioned a few times before), such as something_async()@ compared to something with await. Maybe it's because await is a well-known keyword from other languages? The @ could be funny as it resembles an a from await, but it could be any symbol that fits nicely.

I'd argue that such a syntax choice would be logical, as something similar happened with try!(), which basically became a suffixed ? (I know this is not exactly the same). It is concise, easy to remember and easy to type.

Another awesome thing about such syntax is that the behavior is immediately clear when combined with the ? symbol (at least I believe it would be). Take a look at the following:

// Await, then unwrap a Result from the future
awaiting_a_result()@?;

// Unwrap a future from a result, then await
result_with_future()?@;

// The real crazy can make it as funky as they want
magic()?@@?@??@; 
// - I'm joking, of course

This doesn't have the issue that await future? has, where it isn't clear at first sight what will happen unless you know about this situation. And yet its implementation is consistent with ?.

Now, there are just a few _minor_ things I can think of which would counter this idea:

  • maybe it's _too_ concise and less visible/verbose than something with await, which makes it _hard_ to spot suspension points in a function.
  • maybe it's asymmetrical with the async keyword, where one is a keyword and the other a symbol. Although await!() suffers from the same problem, being a keyword versus a macro.
  • choosing a symbol adds yet another syntactic element, and another thing to learn. But assuming this might become something commonly used, I don't think that's a problem.

@phaux also mentioned the use of the ~ symbol. However, I believe this character is awkward to type on quite a few keyboard layouts, so I'd recommend dropping that idea.

What are your thoughts? Do you agree it's somewhat similar to how try!() _became_ ?? Do you prefer await or a symbol, and why? Am I crazy for discussing this, or am I maybe missing something?

Sorry for any incorrect terminology I may have used.

The biggest worry I have with sigil-based syntax is that it can easily turn into glyph soup, as you kindly demonstrated. Anyone familiar with Perl (pre-6) will understand where I'm going with this. Avoiding line noise is the best way to move forward.

That said, maybe the best way to go is actually exactly like with try!? That is, start with an explicit await!(foo) macro, and if the need arises, add some sigil that would be sugar for it. Sure, that's postponing the problem to later, but await!(foo) is plenty for a first iteration of async/await, with the advantage of being relatively uncontroversial (and of having the precedent of try!/? should the need for a sigil arise).

@withoutboats I haven't read this entire thread, but is anyone helping with the implementation? Where is your development branch?

And, regarding the remaining unresolved considerations, has anyone asked for help from experts outside the Rust community? Joe Duffy knows and cares a lot about concurrency and understands the fiddly details quite well, and he's given a keynote at RustConf, so I suspect he may be amenable to a request for guidance, if such a request is warranted.

@BatmanAoD an initial implementation was landed in https://github.com/rust-lang/rust/pull/51580

The original RFC thread had comments from a number of experts in the PLT space, outside of Rust even :)

I would like to suggest the '$' symbol for awaiting Futures, because time is money, and I want to remind the compiler of this.

Just joking. I don't think having a symbol for awaiting is a good idea. Rust is all about being explicit, and enabling people to write low level code in a powerful language that doesn't let you shoot yourself in the foot. A symbol is a lot more vague than an await! macro, and allows people to shoot themselves in the foot in a different way by writing difficult to read code. I would already argue that ? is a step too far.

I also disagree with there being an async keyword to be used in the form async fn. It implies some kind of "bias" towards async. Why does async deserve a keyword? Asynchronous code is just another abstraction for us which isn't always necessary. I think an async attribute acts more like an "extension" of the base Rust dialect which allows us to write more powerful code.

I am no language architect, but I have a bit of experience writing async code with Promises in JavaScript, and I think the way it's done there makes async code a pleasure to write.

@steveklabnik Ah, okay, thank you. Can we ( / should I) update the issue description? Perhaps the bullet item "initial implementation" should be split into "implementation without move support" and "full implementation"?

Is the next implementation iteration being worked on in some public fork/branch? Or can that not even proceed until RFC 2418 is accepted?

Why is the async/await syntax issue being discussed here rather than in an RFC?

@c-edw I think the question about the async keyword is answered by What Color is Your Function?

@parasyte It has been suggested to me that that post is in fact an argument against the entire idea of async functions without automatically-managed green-thread style concurrency.

I disagree with this position, because green threads can't be implemented transparently without a (managed) runtime, and there's good reason for Rust to support asynchronous code without requiring that.

But it seems that your reading of the post is that the async/await semantics are fine, but there is a conclusion to be drawn about the keyword? Would you mind expanding upon that?

I agree with your point of view there, as well. I was commenting that the async keyword is necessary, and the article lays out the reasoning behind it. The conclusions drawn by the author are a different matter.

@parasyte Ah, okay. Glad I asked--because of the author's aversion to red/blue dichotomies, I thought you were saying the opposite!

I'd like to further clarify, since I feel I didn't quite do it justice.

The dichotomy is inescapable. Some projects have tried to erase it by making every function call async, enforcing that sync function calls do not exist. Midori is an obvious example. And other projects have tried to erase the dichotomy by hiding the async functions behind the facade of sync functions. gevent is an example of this kind.

Both have the same problem; they still need the dichotomy to distinguish between waiting for an asynchronous task to complete and starting a task asynchronously!

  • Midori introduced not only the await keyword, but also an async keyword on the function call site.
  • gevent provides gevent.spawn in addition to the implicit-await of normal-looking function calls.

That was the whole reason I brought up the color-a-function article, since it answers the question, "Why does async deserve a keyword?"

Well, even thread-based synchronous code can distinguish "waiting for a task to complete" (join) and "starting a task" (spawn). You could imagine a language where everything is async (implementation-wise), but there's no annotation on await (because it's the default behavior), and Midori's async is instead a closure passed to a spawn API. This puts all-async on the exact same syntactic/function-color footing as all-sync.

So while I agree async deserves a keyword, it seems to me that's more because Rust cares about the mechanism of implementation at this level, and needs to provide both colors for that reason.

@rpjohnst Yes, I've read your proposals. It's conceptually the same as hiding the colors à la gevent, which I criticized on the Rust forum in the very same thread; every function call looks synchronous, which is a particular hazard when a function is both synchronous and blocking in an async pipeline. This kind of bug is unpredictable and a real disaster to troubleshoot.

I'm not talking about my proposal in particular, I'm talking about a language where everything is asynchronous. You can escape the dichotomy that way- my proposal doesn't attempt that.

IIUC that’s exactly what Midori attempted. At that point, keywords vs closures is just arguing semantics.


You used the presence of keywords as your argument for why the dichotomy still exists in Midori. If you remove them, where is the dichotomy? The syntax is identical to all-sync code, but with the capabilities of async code.

Because when you call an async function without awaiting its result, it synchronously returns a promise. Which can be awaited later. 😐

_Wow, someone knows something about Midori? I always thought it was a closed project with practically no living creature still working on it. It would be interesting if some of you wrote about it in more detail._

/offtopic

@Pzixel No living creature is still working on it, because the project was shut down. But Joe Duffy's blog has lots of interesting details. See my links above.

We've gone off the rails here, and I feel like I'm repeating myself, but that's part of "the presence of keywords"- the await keyword. If you replace the keywords with APIs like spawn and join, you can be fully-async (like Midori) but without any dichotomy (unlike Midori).

Or in other words, like I said before, it's not fundamental- we only have it because we want the choice.

@CyrusNajmabadi sorry for pinging you again, but here is some additional information about the decision making.

If you don't want to be mentioned again just tell me, please. I merely thought you may be interested.

From the #wg-net discord channel:

@cramertj
food for thought: i'm often writing Ok::<(), MyErrorType>(()) at the end of async { ... } blocks. perhaps there's something we can come up with to make constraining the error type easier?

@withoutboats
[...] possibly we want it to be consistent with [try]?

(I remember some relatively recent discussion on how try blocks could declare their return type, but I can't find it now...)

mentioned hues:

async -> io::Result<()> {
    ...
}

async: io::Result<()> {
    ...
}

async as io::Result<()> {
    ...
}

One thing that try can do which is less ergonomic with async is to use a variable binding or type ascription, e.g.

let _: io::Result<()> = try { ... };
let _: impl Future<Output = io::Result<()>> = async { ... };

I had previously tossed around the idea of allowing fn-like syntax for the Future trait, e.g. Future -> io::Result<()>. That would make manual-type-providing option look a bit better, though it's still a lot of characters:

let _: impl Future -> io::Result<()> = async {
}
async -> impl Future<Output = io::Result<()>> {
    ...
}

would be my pick.

It's similar to the existing closure syntax:

|x: i32| -> i32 { x + 1 };

Edit: And eventually when it's possible for TryFuture to implement Future:

async -> impl TryFuture<Ok = i32, Error = ()> {
    ...
}

Edit2: To be precise the above would work with today's trait definitions. It's just that a TryFuture type isn't as useful today because it currently doesn't implement Future

@MajorBreakfast Why -> impl Future<Output = io::Result<()>> rather than -> io::Result<()>? We already do the return-type-desugaring for async fn foo() -> io::Result<()>, so IMO if we use a ->-based syntax it seems clear that we would want the same sugar here.

@cramertj Yes it should be consistent. My post above kinda assumes that I can convince you all that the outer return type approach is superior 😁

In case we go with async -> R { .. } then presumably we should also go with try -> R { .. } as well as using expr -> TheType in general for type ascription. In other words, the type ascription syntax we use should be uniformly applied everywhere.

@Centril I agree. It should be usable everywhere. I'm just not sure anymore whether -> is really the right choice. I associate -> with being callable. And async blocks aren't.

@MajorBreakfast I basically agree; I think we should use : for type ascription, so async : Type { .. }, try : Type { .. }, and expr : Type. We've discussed the potential ambiguities on Discord and I think we found a way forward with : that makes sense...

Another question is about the Either enum. We already have Either in the futures crate. It's also confusing because it looks just like Either from the either crate when it's not.

Because Futures seem to be getting merged into std (at least very basic parts of them), could we also include Either there? It's crucial to have it in order to be able to return impl Future from a function.

For example, I often write code like the following:

fn handler() -> impl Future<Item = (), Error = Bar> + Send {
    someFuture()
        .and_then(|x| {
            if condition(&x) {
                Either::A(anotherFuture(x))
            } else {
                Either::B(future::ok(()))
            }
        })
}

I'd like to be able to write it like:

async fn handler() -> Result<(), Bar> {
    let x = await someFuture();
    if condition(&x) {
        await anotherFuture(x);
    }
}

But as I understand it, when async gets expanded it requires Either to be inserted here, because we either go into the condition or we don't.

_You can find the actual code here if you wish. You can see that it has lots of Eithers and they all seem to exist in the expanded code._

@Pzixel you won't need Either inside async functions, as long as you await the futures then the code transformation that async does will hide those two types internally and present a single return type to the compiler.

@Pzixel Also, I (personally) hope Either is not going to be introduced with this RFC, because that'd be introducing a restricted version of https://github.com/rust-lang/rfcs/issues/2414 (that works only with 2 types and only with Futures), thus likely adding API cruft if a general solution is ever merged -- and as @Nemo157 mentioned it doesn't seem to be an emergency to have Either right now :)

@Ekleog sure, it just struck me that I actually have tons of Eithers in my existing async code and I'd really like to get rid of them. Then I recalled my confusion when I spent about half an hour until I realized that my code didn't compile because I had the either crate in my dependencies (future errors are quite hard to understand, so it took quite a long time). So this is why I wrote the comment: just to be sure this problem is addressed somehow.

Of course, this is not related to async/await only; it's a more generic thing, so it deserves its own RFC. I only wanted to emphasize that either the futures crate should know about either or vice versa (in order to implement IntoFuture correctly).

@Pzixel The Either exported by the futures crate is a reexport from the either crate. The futures crate 0.3 can't implement Future for Either because of the orphan rules. It's highly likely that we're also going to remove the Stream and Sink impls for Either for consistency and offer an alternative instead (discussed here). Additionally, the either crate could then implement Future, Stream and Sink itself, likely under a feature flag.

That said, as @Nemo157 already mentioned, when working with futures, it's better to just use async functions instead of Either.

The async : Type { .. } stuff is now proposed in https://github.com/rust-lang/rfcs/pull/2522.

Is the automatic Send implementation for async/await functions already implemented?

It looks like the following async function is not (yet?) Send:

pub async fn __receive() -> ()
{
    let mut chan: futures::channel::mpsc::Receiver<Box<Send + 'static>> = None.unwrap();

    await!(chan.next());
}

Link to full reproducer (that doesn't compile on the playground for lack of futures-0.3, I guess) is here.

Also, when investigating this issue I came upon https://github.com/rust-lang/rust/issues/53249, which I guess should be added to the tracking list of the topmost post :)

Here's a playground showing that async/await functions implementing Send _should_ work. Uncommenting the Rc version correctly detects that function as non-Send. I can take a look at your specific example in a bit (no Rust compiler on this machine 🙁) to try and work out why it's not working.
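
Roughly, the linked playground boils down to a check like the following sketch (not the exact playground code; feature gates as in earlier examples):

#![feature(async_await, futures_api)]

use std::rc::Rc;

fn assert_send<T: Send>(_: T) {}

async fn noop() {}

async fn only_send_data() {
    let x = 1u32; // Send, held across the await point
    await!(noop());
    drop(x);
}

async fn holds_rc() {
    let rc = Rc::new(1); // !Send, held across the await point
    await!(noop());
    drop(rc);
}

fn main() {
    assert_send(only_send_data());
    // assert_send(holds_rc()); // error: the returned future is not `Send`
}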

@Ekleog std::mpsc::Receiver isn't Sync, and the async fn you wrote holds a reference to it. References to !Sync items are !Send.

@cramertj Hmm… but, am I not holding an owned mpsc::Receiver, which should be Send iff its generic type is Send? (also, it's not a std::mpsc::Receiver but a futures::channel::mpsc::Receiver, which is Sync too if the type is Send, sorry for not noticing the mpsc::Receiver alias was ambiguous!)

@Nemo157 Thanks! I've opened https://github.com/rust-lang/rust/issues/53259 in order to avoid too much noise on this issue :)

The question of whether and how async blocks allow ? and other control flow might warrant some interaction with try blocks (e.g. try async { .. } to allow ? without similar confusion to return?).

This means the mechanism for specifying an async block's type may need to interact with the mechanism for specifying a try block's type. I left a comment on the ascription syntax RFC: https://github.com/rust-lang/rfcs/pull/2522#issuecomment-412577175

Just hit what at first I thought was a futures-rs issue, but it turns out it may actually be an async/await issue, so here it is: https://github.com/rust-lang-nursery/futures-rs/issues/1199#issuecomment-413089012

As discussed a few days ago on discord, await has not yet been reserved as a keyword. It is pretty critical to get this reservation in (and added to the 2018 edition keyword lint) before the 2018 release. It’s a slightly complicated reservation since we want to continue using the macro syntax for now.

Will the Future/Task API have a way to spawn local futures?
I see that there's SpawnLocalObjError, but it seems to be unused.

@panicbit At the working group we're currently discussing whether it makes sense to include spawning functionality in the context at all. https://github.com/rust-lang-nursery/wg-net/issues/56

(SpawnLocalObjError is not entirely unused: LocalPool of the futures crate uses it. You're correct, however, that nothing in libcore uses it)

@withoutboats I noticed a few of the links in the issue description are out of date. Specifically, https://github.com/rust-lang/rfcs/pull/2418 is closed and https://github.com/rust-lang-nursery/futures-rs/issues/1199 has been moved to https://github.com/rust-lang/rust/issues/53548

NB. The name of this tracking issue is async/await but it is assigned to the task API as well! The task API currently has a stabilization RFC pending: https://github.com/rust-lang/rfcs/pull/2592

Any chance to make the keywords reusable for alternative async implementations? Currently it creates a Future, but it's kind of a missed opportunity for making push-based async more usable.

@aep It is possible to easily convert from a push-based system into the pull-based Future system by using oneshot::channel.

As an example, JavaScript Promises are push-based, so stdweb uses oneshot::channel to convert JavaScript Promises into Rust Futures. It also uses oneshot::channel for some other push-based callback APIs, like setTimeout.
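
As an illustration of that bridging pattern, here is a minimal sketch (assuming the futures 0.3 oneshot API and the later-stabilized `.await` syntax; `from_callback` and its parameters are made up): the callback side "pushes" a value into the channel, and the returned future "pulls" it out.

use futures::channel::oneshot;
use std::future::Future;

fn from_callback<F>(register: F) -> impl Future<Output = u32>
where
    F: FnOnce(Box<dyn FnOnce(u32) + Send>),
{
    let (tx, rx) = oneshot::channel();
    // The push-based API calls this closure whenever its value is ready.
    register(Box::new(move |value| {
        let _ = tx.send(value);
    }));
    // The pull-based side simply awaits the receiver.
    async move { rx.await.expect("sender dropped") }
}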

Because of Rust's memory model, push based Futures have extra performance costs compared to pull. So it's better to pay that performance cost only when needed (e.g. by using oneshot::channel), rather than having the entire Future system be push-based.

Having said that, I'm not a part of the core or lang teams, so nothing I say has any authority. It's just my personal opinion.

It's actually the other way around in resource-constrained code. Pull models have a penalty because you need the resource inside the thing that's being pulled, rather than feeding the next ready value through a stack of waiting functions. The futures.rs design is simply too expensive for anything close to hardware, such as network switches (my use case) or game renderers.

However, in this case all we'd need is to make async emit something like a Generator does. As I said before, I think async and generators are actually the same thing if you abstract them sufficiently, instead of binding two keywords to a single library.

However, in this case all we'd need is to make async emit something like a Generator does.

async at this point is literally a minimal wrapper around a generator literal. I'm having difficulty seeing how generators help with push-based async IO; don't you rather need a CPS transform for those?

Could you be more specific about what you mean by "you need the resources inside the thing that's being pulled?" I'm not sure why you would need that, or how "feeding the next ready value through a stack of waiting functions" is any different from poll().

I was under the impression that push-based futures were more expensive (and thus harder to use in constrained environments). Allowing for arbitrary callbacks to be attached to a future requires some form of indirection, typically heap allocation, so instead of allocating once with the root future you allocate at every combinator. And cancellation also becomes more complex due to thread safety issues, so you either don't support it or you require all callback completions to use atomic ops to avoid racing. All of that adds up to a much harder to optimize framework, as far as I can tell.

don't you rather need a CPS transform for those?

Yeah, the current generator syntax doesn't work for that because it doesn't have arguments for the continuation, which is why I was hoping async would bring in ways to do it.

you need the resources inside the thing that's being pulled?

That's my terrible way of saying that reversing the order async works in twice has a cost, i.e. once from hardware to futures and back again using channels. You need to carry a whole bunch of stuff that has zero benefit in near-hardware code.

A common example would be that you can't just invoke the future stack when you know a file descriptor of a socket is ready, but instead have to implement all the execution logic to map real-world events to futures, which has external costs such as locking, code size, and most importantly code complexity.

Allowing for arbitrary callbacks to be attached to a future requires some form of indirection

Yes, I understand callbacks are expensive in some environments (not in mine, where execution speed is irrelevant, but I have 1MB total memory, so futures.rs doesn't even fit on flash); however, you don't need dynamic dispatch at all when you have something like continuations (which the current generator concept half-implements).

And cancellation also becomes more complex due to thread safety

I think we're confusing things here. I'm not advocating for callbacks. Continuations can be static stacks just fine. For example, what we implemented in the clay language is just a generator pattern that you can use for push or pull, i.e.:

async fn add (a: u32) -> u32 {
    let b = await
    a + b
}

add(3).continue(2) == 5

I guess I can just continue doing that with a macro, but I feel like it's a missed opportunity to waste a language keyword on one specific concept.

not in mine, where execution speed is irrelevant, but I have 1MB total memory, so futures.rs doesn't even fit on flash

I'm pretty sure the current futures are intended to run in memory-constrained environments. What exactly is taking up so much space?

Edit: this program takes 295KB disk space when compiled --release on my macbook (basic hello world takes 273KB):

use futures::{executor::LocalPool, future};

fn main() {
    let mut pool = LocalPool::new();
    let hello = pool.run_until(future::ready("Hello, world!"));
    println!("{}", hello);
}

not in mine, where execution speed is irrelevant, but I have 1MB total memory, so futures.rs doesn't even fit on flash

I'm pretty sure the current futures are intended to run in memory-constrained environments. What exactly is taking up so much space?

Also what do you mean by memory? I have run current async/await based code on devices with 128 kB flash / 16 kB RAM. There are definitely memory use issues with async/await currently, but those are mostly implementation issues and can be improved by adding some additional optimisations (e.g. https://github.com/rust-lang/rust/issues/52924).

A common example would be that you can't just invoke the future stack when you know a file descriptor of a socket is ready, but instead have to implement all the execution logic to map real-world events to futures, which has external costs such as locking, code size, and most importantly code complexity.

Why? This still doesn't seem like anything that futures forces you into. You can just as easily call poll as you would a push-based mechanism.

Also what do you mean by memory?

I don't think this is relevant. This whole discussion has devolved into invalidating a point I didn't even intend to make. I'm not here to criticize futures beyond saying that bolting its design into the core language is a mistake.

My point is that the async keyword can be made future-proof if it's done properly. Continuations are what I want, but maybe other people will come up with even better ideas.

You can just as easily call poll as you would a push-based mechanism.

Yes, that would make sense if Future::poll had call args. It cannot have them because poll needs to be abstract. Instead I'm proposing to emit a continuation from the async keyword, and impl Future for any Continuation with zero arguments.

It's a simple, low-effort change that adds no cost to futures but allows reuse of keywords that are currently exclusive to one library.

But continuations can of course be implemented with a preprocessor as well, which is what we're going to do. Unfortunately the desugar can only be a closure, which is more expensive than a proper continuation.

@aep How would we make it possible to reuse the keywords (async and await)?

@Centril my naive quick fix would be to lower async to a Generator, not to a Future. That'll allow time to make generators useful for proper continuations rather than being an exclusive backend for futures.

It's maybe a 10-line PR. But I don't have the patience to fight a beehive over it, so I'll just build a preprocessor to desugar a different keyword.

I haven't been following the async stuff so apologies if this has been discussed before / elsewhere but what's the (implementation) plan for supporting async / await in no_std?

AFAICT the current implementation uses TLS to pass a Waker around but there's no TLS (or thread) support in no_std / core. I heard from @alexcrichton that it might be possible to get rid of the TLS if / when Generator.resume gains support for arguments.

Is the plan to block stabilization of async / await on no_std support being implemented? Or are we sure that no_std support can be added without changing any of the pieces that will be stabilized to ship std async / await on stable?

@japaric poll now takes the context explicitly. AFAIK, TLS should no longer be required.

https://doc.rust-lang.org/nightly/std/future/trait.Future.html#tymethod.poll

Edit: not relevant for async/await, only for futures.
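
For reference, this is the shape the linked trait ended up with once stabilized (the exact argument type went through a few iterations on nightly, e.g. LocalWaker vs Context): the waker travels explicitly through the poll argument, so no TLS is needed at the Future level.

use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    type Output;
    // The waker is reachable through `cx`, passed explicitly by the executor.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}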

[...] are we sure that no_std support can be added without changing any of the pieces that will be stabilized to ship std async / await on stable?

I believe so. The relevant pieces are the functions in std::future; these are all hidden behind an additional gen_future unstable feature that will never be stabilized. The async transform uses set_task_waker to store the waker into TLS, then await! uses poll_with_tls_waker to get access to it. If generators get resume-argument support, then the async transform can instead pass the waker in as the resume argument and await! can read it out of the argument.

EDIT: Even without generator arguments I believe this could also be done with some slightly more complicated code in the async transform. I would personally like to see generator arguments added for other use-cases, but I pretty certain that removing the TLS requirement with/without them will be possible.
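
To make the TLS idea above concrete, here is a self-contained toy version of the hand-off (this is not the real std implementation; set_task_waker/poll_with_tls_waker are the unstable hooks being discussed, and the names below are made-up stand-ins just to show the data flow): the generated Future::poll stores the waker in a thread-local before resuming the generator, and the await! expansion reads it back out to poll the inner future.

use std::cell::RefCell;
use std::task::Waker;

thread_local! {
    // Slot that the generated `poll` fills in before resuming the generator.
    static CURRENT_WAKER: RefCell<Option<Waker>> = RefCell::new(None);
}

// Roughly what the async transform would do at the top of `poll`.
fn set_current_waker(waker: Waker) {
    CURRENT_WAKER.with(|slot| *slot.borrow_mut() = Some(waker));
}

// Roughly what the `await!` expansion would do before polling the inner future.
fn take_current_waker() -> Option<Waker> {
    CURRENT_WAKER.with(|slot| slot.borrow_mut().take())
}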

@japaric Same boat. Even if someone made futures work on embedded, its very risky since its all Tier3.

I figured out an ugly hack that requires far less work than fixing async: weave in an Arc through a stack of Generators.

  1. see the "Poll" argument https://github.com/aep/osaka/blob/master/osaka-dns/src/lib.rs#L76 its an Arc
  2. registering something in the poll thing at Line 87
  3. yield to generate a continuation point at line 92
  4. call a generator from a generator to create a higher level stack at line 207
  5. finally executing the whole stack by passing in a runtime at line 215

Ideally they'd just lower async to a "pure" closure stack rather than a Future so you would not need any runtime assumptions and you could then push in the impure environment as an argument at the root.

I was halfway at implemening that

https://twitter.com/arvidep/status/1067383652206690307

but kind of pointless to go all the way if i'm the only one wanting it.

And I couldn't stop thinking about whether TLS-less async/await without generator arguments is possible, so I implemented a no_std proc-macro based async_block!/await! macro pair using just local variables.

It definitely requires a lot more subtle safety guarantees than the current TLS based solution or a generator argument based solution (at least when you just assume the underlying generator arguments are sound), but I'm pretty sure it's sound (as long as no one uses the rather large hygiene hole I couldn't find a way around, this wouldn't be an issue for an in-compiler implementation as it can use unnameable gensym idents to communicate between the async transform and await macro).

I just realized that there's no mention of moving await! from std to core in the OP, maybe #56767 could be added to the list of issues to resolve before stabilization to track this.

@Nemo157 As await! isn't expected to be stabilized it's not a blocker anyways.

@Centril I don't know who told you await! isn't expected to be stabilized... :wink:

@cramertj He meant the macro version not the keyword version i believe...

@crlf0710 what's about the implicit await/explicit async-block version?

@crlf0710 I did as well :)

@cramertj Don't we want to remove the macro because there's currently an ugly hack in the compiler that makes the existence of both await and await! possible? If we stabilize the macro, we'll never be able to remove it.

@stjepang I really don't care too much in any direction about the syntax of await!, aside from a general preference for postfix notations and a dislike of ambiguity and unpronounceable/un-Google-able symbols. As far as I'm aware, the current suggestions (with ? to clarify precedence) are:

  • await!(x)? (what we have today)
  • await x? (await binds tighter than ?, still prefix notation, needs parens to chain methods)
  • await {x}? (same as above, but temporarily require {} in order to disambiguate)
  • await? x (await binds less tightly, still prefix notation, needs parens to chain methods)
  • x.await? (looks like a field access)
  • x#/x~/etc. (some symbol)
  • x.await!()? (postfix-macro-style, @withoutboats and I think perhaps others aren't postfix-macros fans because they expect . to allow type-based dispatch, which it would not for postfix macros)

I think that the best route to shipping is to land await!(x), un-keyword-ify await, and eventually someday sell folks on the niceness of postfix macros, allowing us to add x.await!(). Other people have different opinions ;)

I follow this issue very loosely, but here is my opinion:

Personally I like the await! macro as it is and as it's described here: https://blag.nemo157.com/2018/12/09/inside-rusts-async-transform.html

It's not any kind of magic or new syntax, just a regular macro. Less is more, after all.

Then again, I also preferred try! , as Try still isn't stabilized. However, await!(x)? is a decent compromise between sugar and obvious named actions, and I think it works well. Furthermore, it could potentially be replaced by some other macro in a third-party library to handle extra functionality, such as debug tracing.

Meanwhile async/yield is "just" syntactic sugar for generators. It reminds me of the days where JavaScript was getting async/await support and you had projects like Babel and Regenerator that transpiled async code to use generators and Promises/Futures for async operations, essentially just like we're doing.

Keep in mind that eventually we'll want async and generators to be distinct features, potentially even composable with each other (producing a Stream). Leaving await! as a macro that just lowers to yield is not a permanent solution.

Leaving await! as a macro that just lowers to yield is not a permanent solution.

It can't permanently be user-visible that it lowers to yield, but it can certainly continue to be implemented that way. Even when you have async + generators = Stream you can still use e.g. yield Poll::Pending; vs. yield Poll::Ready(next_value).

Keep in mind that eventually we'll want async and generators to be distinct features

Are async and generators not distinct features? Related, of course, but comparing this again to how JavaScript did it, I always thought async would be built on top of generators, with the only difference being that an async function would return and yield Futures as opposed to any regular value. An executor would be required to evaluate and wait on the async function to execute. Plus some extra lifetime stuff I'm not sure about.

In fact, I once wrote a library about this exact thing, recursively evaluating both async functions and generator functions that returned Promises/Futures.

@cramertj It can't be implemented that way if the two are distinct "effects." There's some discussion around this here: https://internals.rust-lang.org/t/pre-rfc-await-generators-directly/7202. We don't want to yield Poll::Ready(next_value), we want to yield next_value, and have awaits elsewhere in the same function.

@rpjohnst

We don't want to yield Poll::Ready(next_value), we want to yield next_value, and have awaits elsewhere in the same function.

Yes, of course that's what it'd appear like to the user, but in terms of the desugaring you just have to wrap yields in Poll::Ready and add a Poll::Pending to the yield generated from await!. Syntactically to end-users they appear as separate features, but they can still share an implementation in the compiler.
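
To make the shared-implementation point concrete, here is a hand-written sketch of the surface such a combined feature would present (MiniStream and Counter are made up for illustration; the real design would use whatever Stream trait the ecosystem settles on): a user-level yield v surfaces as Poll::Ready(Some(v)), an await on a pending future surfaces as Poll::Pending, and returning surfaces as Poll::Ready(None).

use std::pin::Pin;
use std::task::{Context, Poll};

// A made-up minimal Stream-like trait so the example is self-contained.
trait MiniStream {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}

struct Counter {
    remaining: u32,
}

impl MiniStream for Counter {
    type Item = u32;
    fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<u32>> {
        let this = self.get_mut();
        if this.remaining == 0 {
            // A `return` in the async generator body.
            Poll::Ready(None)
        } else {
            this.remaining -= 1;
            // A user-level `yield value`, wrapped in `Poll::Ready`.
            Poll::Ready(Some(this.remaining))
            // An `await` on a not-yet-ready future would instead surface here
            // as `Poll::Pending`.
        }
    }
}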

@cramertj Also this one:

  • await? x

@novacrazy Yes, they are distinct features, but they should be composable together.

And indeed in JavaScript they are composable:

https://thenewstack.io/whats-coming-up-in-javascript-2018-async-generators-better-regex/

“Async generators and iterators are what you get when you combine an async function and an iterator so it’s like an async generator you can wait in or an async function you can yield from,” he explained. Previously, ECMAScript allowed you to write a function you could yield in or wait in but not both. “This is really convenient for consuming streams which are becoming more and more part of the web platform, especially with the Fetch object exposing streams.”

The async iterator is similar to the Observable pattern, but more flexible. “An Observable is a push model; once you subscribe to it, you get blasted with events and notifications at full speed whether you’re ready or not, so you have to implement buffering or sampling strategies to deal with chattiness,” Terlson explained. The async iterator is a push-pull model — you ask for a value and it gets sent to you — which works better for things like network IO primitives.

@Centril ok, opened #56974, is that correct enough to be added as an unresolved question to the OP?


I really don't want to get into the await syntax bikeshed again, but I have to respond to at least one point:

Personally I like the await! macro as it is and as it's described here: https://blag.nemo157.com/2018/12/09/inside-rusts-async-transform.html

Note that I also said I don't believe that the macro can stay a library implemented macro (ignoring whether or not it will continue to appear as a macro to users), to expand on the reasons:

  1. Hiding the underlying implementation, as one of the unresolved issues says you can currently create a generator by using || await!().
  2. Supporting async generators, as @cramertj mentions this requires differentiating between the yields added by await and other yields written by the user. This _could_ be done as a pre-macro-expansion stage, _if_ users never wanted to yield inside macros, but there are very useful yield-in-macro constructs like yield_from!. With the constraint that yields in macros must be supported this requires await! to be a builtin macro at least (if not actual syntax).
  3. Supporting async fn on no_std. I know of two ways to implement this, both ways require the async fn-created-Future and await to share an identifier that the waker is stored in. The only way I can see to have a hygienically safe identifier shared between these two places is if both are implemented in the compiler.

I think there's a bit of confusion here-- it was never the intention that await! be publicly visibly expandable to a wrapper around calls to yield. Any future for the await! macro-like syntax will rely on an implementation not unlike that of the current compiler-supported compile_error!, assert!, format_args! etc. and would be able to desugar to different code depending on the context.

The only important bit to understand here is that there isn't a significant semantic difference between any of the proposed syntaxes-- they're just surface syntax.

I would like to propose an alternative for the await syntax.

First of all, I like the idea of putting await as a postfix operator. But expression.await is too much like a field, as already pointed out.

So my proposal is expression awaited. The disadvantage here is that awaited is not yet reserved as a keyword, but it is more natural in English, and no such expression (I mean, grammar forms like expression [token]) is valid in Rust right now, so this can be justified.

Then we can write expression? awaited for awaiting a Result<Future,_>, and expression awaited? for awaiting a Future<Item=Result<_,_>>.

@earthengine

While I'm not sold on the awaited keyword, I think you're onto something.

The key insight here is: yield and await are like return and ?.

return x returns value x, while x? unwraps result x, returning early if it's Err.
yield x yields value x, while x awaited awaits future x, returning early if it's Pending.

There's a nice symmetry to it. Perhaps await really should be a postfix operator.

let x = x.do_something() await.do_another_thing() await;
let x = x.foo(|| ...).bar(|| ... ).baz() await;

I'm not a fan of a postfix awaited syntax for the exact reason @cramertj just showed. It reduces overall readability, especially for long expressions or chained expressions. It doesn't give any sense of nesting like await!/await would. It doesn't have the simplicity of ?, and we're running out of symbols to use for a postfix operator...

Personally, I'm still in favor of await! for the reasons I described previously. It feels Rust-y and no-nonsense.

It reduces overall readability, especially for long expressions or chained expressions.

By rustfmt's standards, the example would be written

let x = x.do_something() await
         .do_another_thing() await;
let x = x.foo(|| ...)
         .bar(|| ...)
         .baz() await;

I can hardly see how this affects readability.

I like postfix await as well. I think that using a space would be unusual and would tend to break mental grouping. However, I do think that .await!() would pair nicely, with ? fitting either before or after, and the ! would allow for control-flow interactions.

(That doesn't require a fully general postfix macro mechanism; the compiler could special-case .await!().)

I started out really strongly disliking the postfix await (with no . or ()) since it looks pretty strange-- folks coming from other languages will get a good chuckle at our expense for sure. That's a cost we should take seriously. However, x await is clearly not a function call or a field access (x.await/x.await()/await(x) all have this problem) and there're fewer funky precedence issues. This syntax would clearly resolve ? and method access precedence, e.g. foo await? and foo? await both have clear precedence ordering to me, as do foo await?.x and foo await?.y (not denying that they look odd, only arguing that the precedence is clear).

I also think that

stream.for_each(async |item| {
    ...
}) await;

reads more nicely than

await!(stream.for_each(async |item| {
    ...
}));

All in all, I'd be in support of this.

@joshtriplett RE .await!() we should talk separately-- I was initially in favor of this as well, but I don't think we should land this if we can't also get postfix macros in general, and I think there's a good deal of standing opposition towards them (with pretty good, albeit unfortunate reason), and I'd really like for that not to block await stabilization.

Why not both?

macro_rules! await {
    ($e:expr) => {{$e await}}
}

I do see the appeal of postfix more now, and bordering on liking it more in some scenarios. Especially with the above cheat, which is so simple it doesn't even need to be provided by Rust itself.

So, +1 for postfix.

I do think we should also have a prefix function, in addition to the postfix version.

As for specifics of postfix syntax, I'm not trying to say that .await!() is the only viable postfix syntax; I'm just not a fan of postfix await with a leading space.

It looks passable (though still unusual) when you format it with one statement per line, but much less reasonable when you format simple statements on one line.

For those who don't like postfix keyword operators, we can define a proper symbolic operator for await.

Right now, we have kind of run out of simple ASCII characters for a postfix operator. However, how about

let x = do_something()⌛.do_something_else()⌛;

If we really need plain ASCII, I came up with (inspired by the shape above)

let x = do_something()><.do_something_else()><;

or (a similar shape in horizontal position)

let x = do_something()>=<.do_something_else()>=<;

Another idea is to make the await construct a bracket.

let x = >do_something()<.>do_something_else()<;

All those ASCII solutions share the same parsing issue: <..> is already heavily used and we have parsing issues with < and >. However, >< or >=< might be better for this as they require no space inside the operator and leave no open < in this position.


For those who just don't like the space in between but are OK with postfix keyword operators, how about using hyphens:

let x = do_something()-await.do_something_else()-await;

About having many different ways of writing the same code: I personally don't like it. The main reason is that it is much harder for people who are new to understand what the right way is, or what the point of having it is. The second reason is that we would have many different projects using different syntaxes, and it would be harder to jump between them and read them (especially for newcomers to Rust). I think a different syntax should be implemented only if there is an actual difference and it gives some advantage. A lot of syntax sugar just makes the language harder to learn and work with.

@goffrie Yes, I agree that we should not have many different ways to do the same thing. However, I was just proposing different alternatives; the community only needs to pick one. Therefore this is not really a concern.

Furthermore, in terms of the await! macro, there is no way to stop users from inventing their own macros to do it differently, and Rust is intended to enable this. Therefore "having many different ways to do the same thing" is inevitable.

I think that simple dumb macro I showed demonstrates that no matter what we do, users will do whatever they want anyway. A keyword, be it prefix or postfix, can be made into either a function-like prefix macro, or presumably into a postfix method-like macro, whenever those exist. Even if we chose function- or method-like macros for await, they could be inverted with yet another macro. It really doesn't matter.

Therefore, we should focus on flexibility and formatting. Provide a solution that would most easily fill all those possibilities.

Furthermore, although in this short time I have grown attached to the postfix keyword syntax, await should mirror whatever is decided for yield with generators, which is probably a prefix keyword. For users that desire a postfix solution, method-like macros will probably exist eventually.

My conclusion is that a prefix keyword await is the best default syntax for now, perhaps with a regular crate providing users with a function-like await! macro, and in the future a postfix method-like .await!() macro.

@novacrazy

Furthermore, although in this short time I have grown attached to the postfix keyword syntax, await should mirror whatever is decided for yield with generators, which is probably a prefix keyword.

The expression yield 42 is at the type !, whereas foo.await is at type T where foo: impl Future<Output = T>. @stjepang makes the right analogy with ? and return here. await is not like yield.

Why not both?

macro_rules! await {
    ($e:expr) => {{$e await}}
}

You'll need to name the macro something else because await should remain a true keyword.


For a variety of reasons, I'm opposed to prefix await and even more so the block form await { ... }.

First there are the precedence issues with await expr? where the consistent precedence is await (expr?) but you want (await expr)?. As a solution to the precedence issues, some have suggested await? expr in addition to await expr. This entails await? as a unit and special casing; that seems unwarranted, a waste of our complexity budget, and an indication that await expr has serious problems.

More importantly, Rust code, and in particular the standard library is heavily centered around the power of the dot and method call syntax. When await is prefix, it encourages the user to invent temporary let bindings instead of simply chaining methods. This is the reason why ? is postfix, and for the same reason, await should also be postfix.

Even worse would be await { ... }. This syntax would, if formatted consistently according to rustfmt, turn into:

    let x = await { // by analogy with `loop`
        foo.bar.baz.other_thing()
    };

This would not be ergonomic and would significantly bloat the vertical length of functions.


Instead, I think awaiting, like ?, should be postfix since that fits with the Rust ecosystem that is centered around method chaining. A number of postfix syntaxes have been mentioned; I'll go through some of them:

  1. foo.await!() -- This is the postfix macro solution. While I'm strongly in favor of postfix macros, I concur with @cramertj in https://github.com/rust-lang/rust/issues/50547#issuecomment-454225040 that we should not do this unless we also commit to postfix macros in general. I also think that using a postfix macro in this way gives a rather non-first-class feeling; we should imo avoid making a language construct use macro syntax.

  2. foo await -- This is not so bad, it truly works like a postfix operator (expr op) but I feel as tho there's something missing with this formatting (i.e. it feels "empty"); In contrast, expr? attaches ? directly onto expr; there's no space here. This makes ? look visually appealing.

  3. foo.await -- This has been criticized for looking like a field access; and that is true. We should however remember that await is a keyword and thus it will be syntax highlighted as such. If you read Rust code in your IDE or equivalently on GitHub, await will be in a different color or boldness than foo is. Using a different keyword we can demonstrate this:

    let x = foo.match?;
    

    Customarily, fields are also nouns whereas await is a verb.

    While there's an initial ridicule factor about foo.await, I think it should be given serious consideration as a visually appealing while also readable syntax.

    As a bonus, using .await gives you the power of the dot and the auto-completion the dot usually has in IDEs (see page 56). For example, you can write foo. and if foo happens to be a future, await will be shown as the first choice. This facilitates both ergonomics and developer productivity since reaching for the dot is a thing many developers have trained into muscle memory.

    In all the possible postfix syntaxes, despite the criticism about looking like field access, this remains my favorite syntax.

  4. foo# -- This uses the sigil # to await on foo. I think considering a sigil is a good idea given that ? is also a sigil and because it makes awaiting light weight. Combined with ? it would look like foo#? -- that looks OK. However, # does not have a specific justification. Rather, it's merely a sigil that is still available.

  5. foo@ -- Another sigil is @. When combined with ?, we get foo@?. One justification for this specific sigil is that it looks a-ish (@wait).

  6. foo! -- Finally, there's !. When combined with ?, we get foo!?. Unfortunately, this has a certain WTF feeling to it. However, ! looks like forcing the value, which fits "await". There is one drawback in that foo!() already is a legal macro invocation, so awaiting and calling a function would need to be written (foo)!(). Using foo! as a syntax would also rob us of the chance to have keyword macros (e.g. foo! expr).

Another single sigil is foo~. The wave can be understood as "echoes" or "takes time". Yet it is not used anywhere in the Rust language.

Tilde ~ was used in the old days for the heap allocated type: https://github.com/rust-lang/rfcs/blob/master/text/0059-remove-tilde.md

Can ? be reused? Or is that too much magic? What would impl Try for T: Future look like?

@parasyte Yes I remember. But still it was long gone.

@jethrogb there's no way that I could see impl Try directly working, ? explicitly returns the result of Try from the current function while await needs to yield.

Maybe ? could be special-cased to do something else in the context of a generator so that it can either yield or return depending on the type of the expression it's applied to, but I'm not sure how understandable that would be. Also how would that interact with Future<Output=Result<...>>, would you have to let foo = bar()??; to both do the "await" and then get the Ok variant of the Result (or would ? in generators be based on a tristate trait that can yield, return or resolve to a value with a single application)?

That last parenthesized remark actually makes me think it could be workable, click to see a quick sketch
enum GenOp<T, U, E> { Break(T), Yield(U), Error(E) }

trait TryGen {
    type Ok;
    type Yield;
    type Error;

    fn into_result(self) -> GenOp<Self::Ok, Self::Yield, Self::Error>;
}
with `foo?` within a generator expanding to something like (although this has an ownership issue, and needs to also stack-pin the result of `foo`)
loop {
    match TryGen::into_result(foo) {
        GenOp::Break(val) => break val,
        GenOp::Yield(val) => yield val,
        GenOp::Error(val) => return Try::from_error(val.into()),
    }
}

Unfortunately I don't see how to handle the waker context variable in a scheme like this; maybe if ? were special-cased for async instead of generators, but if it is going to be special-cased here, it would be nice if it were usable for other generator use-cases.

I had the same thought regarding re-use of ? as @jethrogb.

@Nemo157

there's no way that I could see impl Try directly working, ? explicitly returns the result of Try from the current function while await needs to yield.

Perhaps I'm missing some details on ? and the Try trait, but where/why is that explicit? And isn't a return in an async closure essentially the same as yield anyway, just a different state transition?

Maybe ? could be special-cased to do something else in the context of a generator so that it can either yield or return depending on the type of the expression it's applied to, but I'm not sure how understandable that would be.

I don't see why that should be confusing. If you think of ? as "continue or diverge", then it seems natural, IMHO. Granted, changing the Try trait to use different names for the associated return types would help.

Also how would that interact with Future<Output=Result<...>>, would you have to let foo = bar()??;

If you want to await the result and then also exit early on an Error result, then that would be the logical expression, yes. I don't think a special tri-state TryGen would be needed at all.

Unfortunately I don't see how to handle the waker context variable in a scheme like this, maybe if ? were special-cased for async instead of generators, but if it is going to be special-cased here it would be nice if it were usable for other generators use-cases.

I don't understand this part. Could you elaborate?

@jethrogb @rolandsteiner A struct could implement both Try and Future. In this case, which one should ? unwrap?

@jethrogb @rolandsteiner A struct could implement both Try and Future. In this case, which one should ? unwrap?

No it couldn't because of the blanket impl Try for T: Future.

Why is no one talking about the explicit construction and implicit await proposal? It is equivalent to sync IO, except that it blocks the task instead of the thread. I would even say that blocking a thread is more invasive than blocking a task, so why is there no special "await" syntax for thread-blocking IO?

but it's all just bike-shedding; I think we should settle on the simple macro syntax await!(my_future), at least for now

but it's all just bike-shedding; I think we should settle on the simple macro syntax await!(my_future), at least for now

No, it's not "just" bike-shedding as if that were something banal and insignificant. Whether await is written prefix or postfix fundamentally impacts how async code is written wrt. method chaining and how composable it feels. Stabilizing on await!(future) also entails that await as a keyword is relinquished which makes future usage of await as a keyword impossible. "At least for now" suggests that we can find a better syntax later and disregards the technical debt this entails. I'm opposed to knowingly introducing debt for a syntax that is meant to be replaced later.

Stabilizing on await!(future) also entails that await as a keyword is relinquished which makes future usage of await as a keyword impossible.

We could make it a keyword in the next edition, requiring the raw identifier syntax for the macro, just as we did with try.
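
For reference, that is the same escape hatch the 2018 edition uses for try!: the old macro still exists but has to be invoked through raw-identifier syntax, and a hypothetical await! macro could in principle be kept reachable the same way. A small sketch (the r#try call works on the 2018 edition, modulo the deprecation warning; the function itself is made up):

fn parse_and_bump(s: &str) -> Result<i32, std::num::ParseIntError> {
    // `try` is a keyword on the 2018 edition, so the macro needs the raw form.
    let n = r#try!(s.parse::<i32>());
    Ok(n + 1)
}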

@rolandsteiner

And isn’t a return in an async closure essentially the same as yield anyway, just a different state transition?

yield doesn't exist in an async closure; it's an operation introduced during the lowering from async/await syntax to generators/yield. In the current generator syntax, yield is quite different from return; if the ? expansion is done before the generator transform, then I don't know how it would know when to insert a return or a yield.

If you want to await the result and then also exit early on an Error result, then that would be the logical expression, yes.

It might be logical, but it seems like a downside to me that a lot of (most?) cases where you are writing async functions will be filled with double ?? to deal with the IO errors.

Unfortunately I don't see how to handle the waker context variable...

I don’t understand this part. Could you elaborate?

The async transform takes in a waker variable in the generated Future::poll function; this then needs to be passed into the transformed await operation. Currently this is handled with a TLS variable provided by std that both transforms reference; if ? were instead handled as a re-yield point _at the generator level_ then the async transform loses out on a way to insert this variable reference.

I wrote a blog post about await syntax outlining my preference as of two months ago. However, it basically assumed a prefix syntax, and just considered the precedence issue from that perspective. Here are some additional thoughts now:

  • My general opinion is that Rust has really stretched its unfamiliarity budget already. It would be ideal for the surface level async/await syntax to be as familiar to someone coming from JavaScript or Python or C# as possible. It would be ideal from this perspective to diverge only in minor ways from the norm. Postfix syntaxes vary on how far of a divergence they are (e.g. foo await is less of a divergence than some sigil like foo@), but they are all more divergent than prefix await.
  • I also prefer to stabilize a syntax that does not use !. Every user dealing with async/await will be left to wonder why await is a macro instead of a normal control flow construct, and I believe the story here will be essentially "well we couldn't figure out a good syntax so we just settled on making it look like a macro." This is not a compelling answer. I don't think the association between ! and control flow is really enough to justify this syntax: I believe ! pretty specifically means macro expansion, which this is not.
  • I'm sort of dubious of the benefit of postfix await in general (not entirely, just sort of). I feel the balance is a bit different from ?, because awaiting is a more expensive operation (you yield in a loop until it's ready, rather than just branching and returning once). I'm sort of suspicious of code that would await two or three times in a single expression; it seems fine to me to say these should be pulled out into their own let bindings. So the try! vs ? trade-off does not pull as strongly to me here. But also, I'd be open to code samples that people think really shouldn't be pulled out into lets and is clearer as method chains.

That said, foo await is the most viable postfix syntax that I've seen so far:

  • It's relatively familiar for postfix syntax. All you have to learn is that await goes after the expression instead of before it in Rust, rather than significantly different syntax.
  • It clearly resolves the precedence issue that all of this has been about.
  • The fact that it doesn't work well with method chaining seems like an advantage to me almost, rather than a disadvantage, for the reasons I alluded to previously. I might be more compelled if we had some grammar rules that prevented foo await.method() just because I really feel the method is being (nonsensically) applied to await, not foo (whereas interestingly I don't feel that with foo await?).

I'm still leaning toward a prefix syntax, but I think await is the first postfix syntax that feels like it has a real shot to me.

Sidenote: it's always possible to use parens to make the precedence clearer:

let x = (x.do_something() await).do_another_thing() await;
let x = x.foo(|| ...).bar(|| ... ).baz() await;

This isn't exactly ideal, but considering that it's trying to cram a lot onto a single line, I think it's reasonable.

And as @earthengine mentioned before, the multi-line version is very reasonable (no extra parens):

let x = x.do_something() await
         .do_another_thing() await;

let x = x.foo(|| ...)
         .bar(|| ... )
         .baz() await;

  • It would be ideal for the surface level async/await syntax to be as familiar to someone coming from JavaScript or Python or C# as possible.

In the case of try { .. }, we took familiarity with other languages into account. However, it was also the right design from a POV of internal consistency with Rust. So with all due respect to those other languages, internal consistency in Rust seems more important and I don't think prefix syntax fits Rust either in terms of precedence or in how APIs are structured.

  • I also prefer to stabilize a syntax that does not use !. Every user dealing with async/await will be left to wonder why await is a macro instead of a normal control flow construct, and I believe the story here will be essentially "well we couldn't figure out a good syntax so we just settled on making it look like a macro." This is not a compelling answer.

I agree with this sentiment, .await!() will not look first class enough.

  • I'm sort of dubious of the benefit of postfix await in general (not entirely, just _sort of_). I feel the balance is a bit different from ?, because awaiting is a more expensive operation (you yield in a loop until it's ready, rather than just branching and returning once).

I don't see what expensiveness has to do with extracting things into let bindings. Method chains can be and sometimes are expensive as well. The benefit of let bindings is a) to give sufficiently large pieces a name where it makes sense, to enhance readability, and b) to be able to refer to the same computed value more than once (e.g. by &x or when a type is copyable).

I'm sort of suspicious of code that would await two or three times in a single expression; it seems fine to me to say these should be pulled out into their own let bindings.

If you feel that they should be pulled out into their own let bindings you can still make that choice with postfix await:

let temporary = some_computation() await?;

For those who disagree and prefer method chaining, postfix await gives the ability to choose. I also think that postfix better follows left-to-right reading and data-flow order here so even if you do extract to let bindings I'd still prefer postfix.

I also don't think you need to await two or three times for postfix await to be useful. Consider for example (this is the result of rustfmt):

    let foo = alpha()
        .beta
        .some_other_stuff()
        .await?
        .even_more_stuff()
        .stuff_and_stuff();

But also, I'd be open to code samples that people think really shouldn't be pulled out into lets and is clearer as method chains.

Most of the fuchsia code I read felt unnatural when extracted into let bindings and with let binding = await!(...)?;.

  • It's relatively familiar for postfix syntax. All you have to learn is that await goes after the expression instead of before it in Rust, rather than significantly different syntax.

My preference for foo.await here is mainly because you get nice autocompletion and the power of the dot. It doesn't feel so radically different either. Writing foo.await.method() also makes it clearer that .method() is applied to foo.await. So it resolves that concern.

  • It clearly resolves the precedence issue that all of this has been about.

No, it's not just about precedence. Method chains are equally important.

  • The fact that it doesn't work well with method chaining seems like an advantage to me almost, rather than a disadvantage, for the reasons I alluded to previously.

I'm not sure why it doesn't work well with method chaining.

I might be more compelled if we had some grammar rules that prevented foo await.method() just because I really feel the method is being (nonsensically) applied to await, not foo (whereas interestingly I don't feel that with foo await?).

Whereas I would be uncompelled to go with foo await if we introduced an intentional design papercut and prevented method chaining with the postfix await syntax.

Granting that every option has a downside, and that one of them should nonetheless end up being chosen... one thing that bothers me about foo.await is that, even if we assume that it won't literally be mistaken for a struct field, it still looks like accessing a struct field. The connotation of field access is that nothing particularly impactful is happening -- it's one of the least-effectful operations in Rust. Meanwhile awaiting is highly impactful, one of the most side-effecting operations (it both performs the I/O operations built up in the Future and has control flow effects). So when I read foo.await.method(), my brain is telling me to kind of skip over the .await because it's relatively uninteresting, and I have to use attention and effort to manually override that instinct.

it still _looks like_ accessing a struct field.

@glaebhoerl You make good points; however, does syntax highlighting have no/insufficient impact on what it looks like and the way your brain processes things? At least for me color and boldness matters a great deal when reading code so I wouldn't skip over .await that has a different color than the rest of the things.

The connotation of field access is that nothing particularly impactful is happening -- it's one of the least-effectful operations in Rust. Meanwhile awaiting is highly impactful, one of the most side-effecting operations (it both performs the I/O operations built up in the Future and has control flow effects).

I strongly agree with this. await is a control flow operation like break or return, and should be explicit. The proposed postfix notation feels unnatural, like Python's if: compare if c { e1 } else { e2 } to e1 if c else e2. Seeing the operator at the end makes you do a double-take, regardless of any syntax highlighting.

I also don't see how e.await is more consistent with the Rust syntax than await!(e) or await e. There's no other postfix keyword, and since one of the ideas was to special-case it in the parser, I don't think that's a proof of being consistent.

There's also the familiarity issue @withoutboats mentioned. We can choose weird and wonderful syntax if it has some wonderful benefits. Does a postfix await have them, though?

does syntax highlighting have no/insufficient impact on what it looks like and the way your brain processes things?

(Good question, I'm sure it would have some impact, but it's hard to guess how much without actually trying it (and substituting a different keyword only gets so far). While we're on the subject... a long time ago I mentioned that I think syntax highlighting should highlight all operators with control flow effects (return, break, continue, ?... and now await) in some special extra-distinctive color, but I'm not in charge of the syntax highlighting for anything and don't know if anyone actually does this.)

I strongly agree with this. await is a control flow operation like break or return, and should be explicit.

We agree. The notations foo.await, foo await, foo#, ... are explicit. There's no implicit await being done.

I also don't see how e.await is more consistent with the Rust syntax than await!(e) or await e.

The syntax e.await per se isn't consistent with Rust syntax but postfix generally fits better with ? and how Rust APIs are structured (methods are preferred over free functions).

The await e? syntax, if associated as (await e)? is completely inconsistent with how break and return associate. await!(e) is also inconsistent since we don't have macros for control flow and it also has the same problem as other prefix methods.

There's no other postfix keyword, and since one of the ideas was to special-case it in the parser, I don't think that's a proof of being consistent.

I don't think you actually need to change libsyntax at all for .await since it should already be handled as a field operation. The logic would rather be dealt with in resolve or HIR where you translate it to a special construct.

We can choose weird and wonderful syntax if it has some wonderful benefits. Does a postfix await have them, though?

As aforementioned, I argue it does due to method chaining and Rust's preference for method calls.

I don't think you actually need to change libsyntax at all for .await since it should already be handled as a field operation.

This is fun.
So the idea is to reuse the self/super/...'s approach, but for fields rather than for path segments.

This effectively makes await a path segment keyword though (since it goes through resolution), so you may want to prohibit raw identifiers for it.

#[derive(Default)]
struct S {
    r#await: u8
}

fn main() {
    let s = S::default();
    let z = S::default().await; //  Hmmm...
}

There's no implicit await being done.

The idea came up a couple of times on this thread (the "implicit await" proposal).

we don't have macros for control flow

There is try! (which served its purpose pretty well) and arguably the deprecated select!. Note that await is "stronger" than return, so it's not unreasonable to expect it to be more visible in the code than ?'s return.

I argue it does due to method chaining and Rust's preference for method calls.

It also has a (more noticeable) preference for prefix control flow operators.

The await e? syntax, if associated as (await e)? is completely inconsistent with how break and return associate.

I prefer await!(e)?, await { e }? or maybe even { await e }? -- I don't think I've seen the latter discussed, and I'm not sure if it works.


I admit I might have a left-to-right bias.

My opinion on this seems to change every time I look at the issue, as if playing Devil’s advocate to myself. Part of that is because I’m so used to writing my own futures and state machines. A custom future with poll is totally normal.

Perhaps this should be thought of another way.

To me, zero-cost abstractions in Rust refers to two things: zero-cost at runtime, and more importantly zero-cost mentally.

I can very easily reason about most abstractions in Rust, including futures, because they are just state machines.

To this end, a simple solution should exist that introduces as little magic as possible to the user. Sigils especially are a bad idea, as they feel unnecessarily magical. This includes .await magic fields.

Perhaps the best solution is the easiest one, the original await! macro.

So with all due respect to those other languages, internal consistency in Rust seems more important and I don't think prefix syntax fits Rust either in terms of precedence or in how APIs are structured.

I don't see how...? await(foo)?/await { foo }? seems totally fine in terms of operator precedence and how APIs are structured in Rust -- its downside is the wordiness of parens and (depending on your perspective) chaining, not breaking precedent or being confusing.

There is try! (which served its purpose pretty well) and arguably the deprecated select!.

I think the operative word here is deprecated. Using try!(...) is a hard error on Rust 2018. It is a hard error now because we introduced a better, first-class, and postfix syntax.

Note that await is "stronger" than return, so it's not unreasonable to expect it to be more visible in the code than ?'s return.

The ? operator can likewise be side-effecting (through other implementations than for Result) and performs control flow so it's quite "strong" as well. When it was discussed, ? was accused of "hiding a return" and being easy to overlook. I think that prediction completely failed to come true. The situation re. await seems quite similar to me.

It also has a (more noticeable) preference for prefix control flow operators.

Those prefix control flow operators are typed at ! type. Meanwhile, the other control flow operator ? that takes a context impl Try<Ok = T, ...> and gives you a T is postfix.

I don't see how...? await(foo)?/await { foo }? seems totally fine in terms of operator precedence and how APIs are structured in Rust --

The await(foo) syntax is not the same as await foo if parentheses are required for the former and not for the latter. The former is unprecedented; the latter has precedence issues wrt. ? as we've discussed here, on boats's blog post, and on Discord. The await { foo } syntax is problematic for other reasons (see https://github.com/rust-lang/rust/issues/50547#issuecomment-454313611).

its downside is the wordiness of parens and (depending on your perspective) chaining, not breaking precedent or being confusing.

This is what I mean by "APIs are structured". I think methods and method chaining are common and idiomatic in Rust. The prefix and block syntaxes compose poorly with those and with ?.

I may be in the minority with this opinion, and if so, ignore me:

Would it be fair to move the prefix-vs-postfix discussion to an Internals thread, and then simply come back here with the outcome? That way we can keep the tracking issue to tracking the status of the feature?

@seanmonstar Yeah, I strongly sympathize with the desire to limit discussion on tracking issues and have issues that are really just status updates. This is one of the issues I hope we can tackle with some revisions to how we manage the RFC process and issues in general. For now, I've opened a new issue here for us to use for discussion.

IMPORTANT TO ALL: further await syntax discussion should go here.

Temporarily locking for a day to ensure that future discussion about await syntax occurs on the appropriate issue.

On Tue, Jan 15, 2019 at 07:10:32AM -0800, Pauan wrote:

Sidenote: it's always possible to use parens to make the precedence clearer:

let x = (x.do_something() await).do_another_thing() await;
let x = x.foo(|| ...).bar(|| ... ).baz() await;

That defeats the primary benefit of postfix await: "just keep writing/reading". Postfix await, like postfix ?, allows control flow to keep moving left to right:

foo().await!()?.bar().await!()

If await! were prefix, or back when try! was prefix, or if you have to parenthesize, then you have to jump back to the left hand side of the expression when writing or reading it.

EDIT: I was reading comments beginning-to-end via email, and didn't see the "move conversation to the other issue" comments until after sending this mail.

Async-await status report:

http://smallcultfollowing.com/babysteps/blog/2019/03/01/async-await-status-report/


I wanted to post a quick update on the status of the async-await effort. The short version is that we're in the home stretch for some kind of stabilization, but there remain some significant questions to overcome.

Announcing the implementation working group

As part of this push, I'm happy to announce we've formed an async-await implementation working group. This working group is part of the whole async-await effort, but focused on the implementation, and is part of the compiler team. If you'd like to help get async-await over the finish line, we've got a list of issues where we'd definitely like help (read on).

If you are interested in taking part, we have an "office hours" scheduled for Tuesday (see the [compiler team calendar]) -- if you can show up then on [Zulip], it'd be ideal! (But if not, just pop in any time.)

...

When will std::future::Future be stable? Does it have to wait for async/await? I think it's a very nice design and I would like to start porting code to it. (Is there a shim to use it on stable?)

@ry see the fresh tracking issue for it: https://github.com/rust-lang/rust/issues/59113

Another compiler issue for async/await: https://github.com/rust-lang/rust/issues/59245

Also note that https://github.com/rust-lang-nursery/futures-rs/issues/1199 in the top post can be checked off, as it's now fixed.

It looks like there's an issue with HRLB and async closures: https://github.com/rust-lang/rust/issues/59337. (Though, re-skimming the RFC, it doesn't actually specify that async closures are subject to the same argument lifetime capture that async functions have.)

Yeah, async closures have a bunch of issues and shouldn't be included in the initial round of stabilization. The current behavior can be emulated with a closure + async block, and in the future I'd love to see a version that allowed referencing the closure's upvars from the returned future.
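
As a minimal sketch of the emulation mentioned above (the names here are made up for illustration), a plain closure that returns an async move block gives each call its own future that owns its argument, much like an async closure would:

async fn demo() -> u32 {
    // A regular closure returning an `async move` block stands in for
    // the unstable `async |x| { ... }` closure syntax.
    let add_one = |x: u32| async move { x + 1 };

    // Calling the closure produces a future, which can then be awaited.
    add_one(41).await
}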

I've just noticed that currently await!(fut) requires that fut be Unpin: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=9c189fae3cfeecbb041f68f02f31893d

Is that expected? It doesn't appear to be in the RFC.

@Ekleog that's not await! giving the error, await! conceptually stack-pins the passed future to allow !Unpin futures to be used (quick playground example). The error comes from the constraint on impl Future for Box<impl Future + Unpin>, that requires the future to be Unpin to stop you doing something like:

// where Foo: Future + !Unpin
let mut foo: Box<Foo> = ...;
// Poll once: the future is now pinned at this heap location.
Pin::new(&mut foo).poll(cx);
// Move the future out of the box into a fresh allocation...
let mut foo = Box::new(*foo);
// ...and poll it again at its new location, violating the pinning guarantee.
Pin::new(&mut foo).poll(cx);

Because Box is Unpin and allows moving the value out of it, you could poll the future once in one heap location, then move the future out of the box, put it into a new heap location, and poll it again.

await should possibly be special-cased to allow Box<dyn Future>, since it consumes the future.

Maybe the IntoFuture trait should be resurrected for await!? Box<dyn Future> can implement that by converting to Pin<Box<dyn Future>>.
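
A hypothetical sketch of that suggestion (this is not an actual std API; the trait and method names are invented for illustration): an IntoFuture-style conversion would let await! pin a boxed future before polling it, since Pin<Box<dyn Future>> implements Future without requiring Unpin.

use std::future::Future;
use std::pin::Pin;

// Hypothetical trait that `await!` could call before polling.
trait IntoFuture {
    type Output;
    type Future: Future<Output = Self::Output>;
    fn into_future(self) -> Self::Future;
}

// A boxed future could opt in by pinning itself on conversion.
impl<T> IntoFuture for Box<dyn Future<Output = T>> {
    type Output = T;
    type Future = Pin<Box<dyn Future<Output = T>>>;

    fn into_future(self) -> Self::Future {
        Pin::from(self)
    }
}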

Here comes my next bug with async/await: it looks like using an associated type of a type parameter in the return type of an async fn breaks inference: https://github.com/rust-lang/rust/issues/60414

In addition to potentially adding #60414 to the top-post's list (don't know if it's still being used -- maybe it'd be better to point to the github label?), I think the “Resolution of rust-lang/rfcs#2418” can be ticked, as IIRC the Future trait has recently been stabilized.

I just came from a Reddit post and I must say that I don't like postfix syntax at all. It seems like the majority of Reddit doesn't like it either.

I'd rather write

let x = (await future)?

than accepting that weird syntax.

As for chaining, I can refactor my code to avoid having more than one await.

Also, JavaScript may be able to do this in the future (smart pipeline proposal):

const x = promise
  |> await #
  |> x => x.foo
  |> await #
  |> x => x.bar

If prefix await is implemented, that doesn't mean await cannot be chained.

@KSXGitHub this really isn't the place for this discussion, but the reasoning is outlined here, and there are very good reasons for it that have been thought out over many months by many people: https://boats.gitlab.io/blog/post/await-decision/

@KSXGitHub While I also dislike the final syntax, it has been extensively discussed in #57640, https://internals.rust-lang.org/t/await-syntax-discussion-summary/, https://internals.rust-lang.org/t/a-final-proposal-for-await-syntax/, and in various other places. A lot of people have expressed their preference there, and you're not bringing any new arguments to the subject.

Please don't discuss the design decisions here, there's a thread for this explicit purpose

If you plan to comment there, please bear in mind that the discussion has already played out quite a bit: make sure you have something substantial to say, and make sure it's not been said before in the thread.

@withoutboats to my understanding the final syntax has already been agreed upon; maybe it's time to mark it as Done? :blush:

Is the intent to stabilize in time for the next beta cut on July 4, or will blocking bugs require another cycle to resolve? There are plenty of open issues under the A-async-await tag, though I'm not sure how many of those are critical.

Aha, disregard that, I've just discovered the AsyncAwait-Blocking label.

Hello there! When should we expect the stable release of this feature? And how can I use it in nightly builds?

@MehrdadKhnzd https://github.com/rust-lang/rust/issues/62149 contains information about the target release date and more

Is there a plan to automatically implement Unpin for futures that are generated by async fn?

Specifically, I'm wondering whether Unpin is unavailable automatically because of the generated Future code itself, or because we can use references as arguments.

@DoumanAsh I suppose if an async fn never has any active self-references at yield points then the generated Future could conceivably implement Unpin, maybe?

I think that would need to be accompanied by some pretty helpful error messages saying "not Unpin because of _this_ borrow", plus a hint like "alternatively, you can box this future".
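
For context, here is a minimal sketch of the situation under discussion (using std APIs and the eventually-stabilized .await syntax): even an async fn that holds no borrow across its await point currently produces a future that is not Unpin, and pinning it on the heap is the usual workaround.

use std::future::{ready, Future};

// No local borrow is held across the `.await` point here, yet the
// generated future still does not implement `Unpin` today.
async fn no_self_refs(x: u32) -> u32 {
    ready(x).await
}

fn requires_unpin(_: impl Future<Output = u32> + Unpin) {}

fn demo() {
    // Passing `no_self_refs(1)` directly would fail the `Unpin` bound;
    // boxing and pinning it yields an `Unpin` handle that still
    // implements `Future`.
    requires_unpin(Box::pin(no_self_refs(1)));
}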

The stabilization PR at #63209 notes that "All the blockers are now closed." and was landed into nightly on August 20, therefore heading for the beta cut later this week. It seems worth noting that since August 20 some new blocking issues have been filed (as tracked by the AsyncAwait-Blocking tag). Two of these (#63710, #64130) appear to be nice-to-haves that would not actually impede stabilization, however there are three other issues (#64391, #64433, #64477) which seem worth discussing. These latter three issues are related, all of them arising due to PR #64292, which itself was landed to address AsyncAwait-Blocking issue #63832. A PR, #64584, has already landed in an attempt to address the bulk of the problems, but the three issues remain open for now.

The silver lining is that the three serious open blockers appear to concern code that should compile, but does not currently compile. In that sense, it would be backwards-compatible to land fixes later without impeding the beta-ization and eventual stabilization of async/await. However, I'm wondering whether anyone from the lang team thinks that anything here is concerning enough to suggest that async/await should bake on nightly for another cycle (which, as distasteful as that may sound, is the point of the rapid release schedule after all).

@bstrie We are just reusing "AsyncAwait-Blocking" for lack of a better label to mark them as "high priority"; they are not actually blocking. We should revamp the labeling system soon to make it less confusing, cc @nikomatsakis.

... Not good... we missed async/await in the expected 1.38 release. Now we have to wait for 1.39, just because of some "issues" that didn't count...

@earthengine I don't think that's a fair assessment of the situation. The issues that have come up have all been worth taking seriously. It wouldn't be good to land async await only for people to then run into these issues trying to use it in practice :)
