Sentry-javascript: Return a promise from the main API capture* methods

Created on 3 May 2019  ·  16 Comments  ·  Source: getsentry/sentry-javascript

Is this guaranteed to report the exception to Sentry?

try {
  // code
} catch (e) {
  const eventId = Sentry.captureException(e);
  console.log('sentry event', eventId);
  process.exit(1);
}

The documentation is not clear if the event id is generated locally or remotely. Is draining relevant at all? https://docs.sentry.io/error-reporting/configuration/draining/?platform=node

Labels: Breaking, Documentation Improvement


All 16 comments

Thanks for highlighting this. To make it clear: it's not fully synchronous; the "transport" runs in the background. So strictly speaking, it is not guaranteed that the event reaches Sentry.

The eventId is generated locally in the SDK and is later used to ingest the event on the server.

We will update our docs to make this clear.

Ok, so just to clarify, it is indeed necessary to implement what is outlined here: https://docs.sentry.io/error-reporting/configuration/draining/?platform=node

Is this correct, @HazAT?

@rhyek Correct. If you want to make sure everything is sent, you can await flush.

Wouldn't it be better to just expose the promise from the capture* methods? The API would be clearer: returning a promise lets users know they can either await it to be sure the event is sent, or fire-and-forget and hope it gets sent. Both close and flush are methods of ambiguous utility that try to make synchronous something that was designed to be asynchronous. Plus, both of these methods are totally broken; in fact, there are quite a few quirks (I would say bugs, but that probably depends on the point of view of the person looking at them):

  • the timeout passed to flush is pointless, because the transport doesn't respect it by interrupting the in-flight requests. It just resolves the returned promise to false when the timer expires. I'm still asking myself what the utility of such code and behavior is.
  • the flush method returns a new promise every time it's called, resolved as soon as the timeout is reached. There are two problems. First, the code executed by setInterval is extremely slow; at least in my tests in the browser console, the promise resolved many seconds after the timeout expired. Second, and more important, the code of the _isClientProcessing function clears the timeout every time it's called, so if someone does something like Promise.all([flush(), flush()]) (I don't know why they would, but since the API returns a new promise every time, they may be tempted to), that promise will never resolve.
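For contrast, a flush(timeout) without the concurrency quirk described above can be sketched like this: each call races its own timer against the shared queue, so concurrent calls such as Promise.all([flush(100), flush(100)]) all resolve. The names here are illustrative, not actual Sentry internals.

```javascript
// Sketch: a concurrency-safe flush. Each call gets its own timer, so no call
// can clear another call's timeout.
const queue = new Set();

function track(promise) {            // register an in-flight transport send
  queue.add(promise);
  promise.finally(() => queue.delete(promise));
  return promise;
}

function flush(timeout) {
  const drained = Promise.all([...queue]).then(() => true);
  const expired = new Promise((resolve) => setTimeout(resolve, timeout, false));
  return Promise.race([drained, expired]);   // true: drained, false: timed out
}
```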

Correct. If you want to make sure everything is sent, you can await flush.

Incorrect; see my points above to understand why, even awaiting flush, you cannot be sure that everything is sent. I would say that such an API design is totally broken, and I can't really understand why the choice of awaiting the promises that represent the sending of an event isn't left to the user, instead of trying to build a shutdown API that cannot even work in all the languages the Unified API is supposed to cover.

I think having access to a promise that resolves when the event is finally sent out, in addition to the other methods described in this thread, makes a lot of sense.

(In reply to Stefano Arlandini's comment of 11 May 2019, above.)

I would also say that I don't expect a method that returns a result to run something in the background, as this is a source of confusion for users (and the main question of this issue, of course). If instead a promise is returned, it's clear that it's just a placeholder for something that will come in the future and is not ready yet. The choice to await it or just fire-and-forget is up to the user, obviously. Also, since the event ID is generated locally (I don't see any reason to do that, by the way), you may end up using something that is returned immediately but isn't actually valid, because for some reason the transport failed to send the event and you don't even know about it.

Been having quite a bit of trouble with Sentry since we upgraded to @sentry/node, we're probably going to go back to raven-node until there's a better solution.

Having to drain queues on arbitrary timeouts at the end of every serverless execution definitely feels more like a hack, and not a solution. 🤔

For those looking for more examples of what might work — I found some of the solutions within #1449 helpful around flushing and such, they discuss a lot of this issue too.

(They're focused on lambdas and serverless but the same concepts would likely work in other environments too as it's the flushing approach).

Definitely keen for there to be a better way at some point that doesn't involve flushing every time.
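The serverless pattern discussed in #1449 can be sketched as a wrapper that drains pending sends before the handler returns, so events aren't lost when the execution environment is frozen. Everything here is illustrative: flushPending stands in for the real Sentry.flush, and the function names are made up for the example.

```javascript
// Sketch: wrap a serverless handler so the transport is drained before return.
function makeFlushPending() {
  const pending = [];
  return {
    track: (p) => { pending.push(p); return p; },       // register a send
    flushPending: () => Promise.all(pending).then(() => true),
  };
}

function withSentry(handler, flushPending) {
  return async (event) => {
    try {
      return await handler(event);
    } finally {
      await flushPending();   // drain pending sends before the runtime freezes
    }
  };
}
```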

Completely agree with what has been said in this thread; IMO it is really confusing to have a synchronous method that hides asynchrony in the background.
Maybe the Sentry team has a very good reason to do it this way; in any case, an update to the documentation would be really appreciated.

@cibergarri I think the reason for this is so that the eventId can be returned immediately without blocking userland code on a potentially slow network call to the Sentry APIs. I agree that it's confusing to return a value from an asynchronous API, and it potentially leads to silent loss of errors in serverless execution contexts that don't stay around long enough for the asynchronous transport calls to finish. One solution would be to split the function in two, so that a user needs to opt in to asynchronous behavior, e.g.

Sentry.captureException() // -> returns eventId after successful submission to sentry
Sentry.captureExceptionAsync() // -> returns promise

Or as a parameter, e.g.

Sentry.captureException(ex, {async: false}) // ->  default, returns eventId after successful submission to sentry
Sentry.captureException(ex, {async: true}) // -> returns promise
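A minimal self-contained sketch of the parameter-based variant suggested above; none of these names are real Sentry API, and the transport is mocked.

```javascript
// Sketch of the proposed opt-in: the caller chooses between getting the
// locally generated eventId right away, or a promise that resolves with it
// only after the transport send completes.
function transportSend(event) {      // stand-in for the HTTP transport
  return new Promise((resolve) => setTimeout(resolve, 10, event));
}

function captureException(error, { async = false } = {}) {
  const eventId = Math.random().toString(16).slice(2);  // locally generated id
  const sent = transportSend({ eventId, message: error.message });
  return async ? sent.then(() => eventId) : eventId;    // opt in to the promise
}
```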

The event ID is always generated in the client, so a sync or async API doesn't matter for it; consequently, it's not a good indicator of whether an event has really been sent or not (another issue, IMO). I also proposed, a long time ago, a similar solution for the PHP SDK that involved having both synchronous and asynchronous methods for each capture* method, but it was rejected. I never fully understood the real reason the Unified API wants to hide asynchronicity and the developers don't want to expose a promise, but it seemed to me they were not very open to changing this.

Sadly, this issue made me switch away from Sentry as it made tracking errors in serverless environments too error prone. (Oh, the irony.)

capture*Async methods would have been super helpful 👍

Can we get an update on this?

@marcospgp what would you like to know exactly? Everything that Daniel wrote still stands true to this day.

I think that, documentation apart, it's pretty clear from the reactions on comments and the comments themselves that users would like to have access to the promises, and there is no good reason not to allow this, IMHO.

Unfortunately, it's not going to happen before the v6 release, as it's a breaking change to our main APIs and would require a lot of changes. I'll add this to the roadmap, though, to keep it noted.
