Sentry-javascript: [@sentry/node] AWS Lambda and other Serverless solutions support

Created on 28 Jul 2018  ·  77 Comments  ·  Source: getsentry/sentry-javascript

  • @sentry/node version 4.0.0-beta.11
  • I'm using hosted Sentry

What is the current behavior?

I'm using @sentry/node to capture exceptions in an AWS Lambda function.

    .catch(err => {
      Sentry.captureException(err)
      context.fail()
    })

However, it kills the process when context.fail() is called and the exception does not end up in Sentry.

I could do a workaround like:

    .catch(err => {
      Sentry.captureException(err)
      setTimeout(() => context.fail(), 1000)
    })

What is the expected behavior?

It would be nice if I could do something like:

    .catch(err => {
      Sentry.captureException(err, () => context.fail())
    })

Or some way to handle the callback globally.

All 77 comments

This may help, I guess: https://blog.sentry.io/2018/06/20/how-droplr-uses-sentry-to-debug-serverless (it's using the old raven version, which had a callback, but I'm mostly pointing at the callbackWaitsForEmptyEventLoop flag).

There's no official way yet, as we're still trying things out in beta, but it's doable with this code:

import { init, getDefaultHub } from '@sentry/node';

init({
  dsn: 'https://my-dsn.com/1337'
});

exports.myHandler = async function(event, context) {
  // your code

  await getDefaultHub().getClient().captureException(error, getDefaultHub().getScope());
  context.fail();
}

@kamilogorek Thank you for the pointer. I'll give it a try and play back the learnings.

@kamilogorek Your suggestion works. I'm looking forward to a more official way.

@vietbui
In 4.0.0-rc.1 we introduced a function on the client called close, you call it like this:

import { getCurrentHub } from '@sentry/node';

getCurrentHub().getClient().close(2000).then(result => {
  if (!result) {
    console.log('We reached the timeout for emptying the request buffer, still exiting now!');
  }
  global.process.exit(1);
})

close will wait until all requests are sent, up until the timeout is reached (2000 ms in this example). It always resolves; the result is false if the timeout was reached before the queue was emptied.
This is our official API.
While the previous approach will still work, the close method works for all cases.
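The close(timeout) contract described above can be sketched in plain JavaScript. This is an illustrative stand-in, not Sentry's implementation: resolve true when all pending sends settle before the timeout, false otherwise.

```javascript
// Illustrative sketch of the close(timeout) contract, not Sentry's code:
// resolve true when all pending sends settle in time, false on timeout.
function drainWithTimeout(pending, timeout) {
  const allSent = Promise.all(pending).then(() => true);
  const timedOut = new Promise(resolve =>
    setTimeout(() => resolve(false), timeout)
  );
  return Promise.race([allSent, timedOut]);
}

// A send that takes 10 ms drains well within a 2000 ms timeout:
const fastSend = new Promise(resolve => setTimeout(resolve, 10));
drainWithTimeout([fastSend], 2000).then(result => {
  console.log(result); // true: the queue drained before the timeout
});
```

Note that the promise always resolves; a timeout is reported through the boolean result, never as a rejection, which matches the behavior described in the comment above.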

@HazAT Nice one. Thanks for all the hard work.

In 4.0.3 I call it like this in my lambda function:

try {
  ...
} catch (err) {
  await getCurrentHub().getClient().captureException(err, getCurrentHub().getScope())
  throw err
}

getDefaultHub() is no longer available.

@vietbui it's called getCurrentHub now, as we had to unify our API with other languages SDKs.

@kamilogorek Thanks for the clarification. There is a problem with the getCurrentHub approach, as somehow the scope I set up did not end up in Sentry.

In the end I took a different approach as suggested by @HazAT to capture exception in my lambda functions:

try {
  ...
} catch (err) {
  Sentry.captureException(err)
  await new Promise(resolve => Sentry.getCurrentHub().getClient().close(2000).then(resolve))
  throw err
}

And it works perfectly.

Is this the recommended way to wait/force sentry to send events?

@albinekb yes – https://docs.sentry.io/learn/draining/?platform=browser

This solution does not work for me for some reason. It only works the first time in production, on a cold start, and does not work after that. Here is example code:

'use strict'

const Sentry =  require('@sentry/node')
Sentry.init({
  dsn: 'xxx',
  environment: process.env.STAGE
});

module.exports.createPlaylist = async (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false
  if(!event.body) {
    Sentry.captureException(error)
    await new Promise(resolve => Sentry.getCurrentHub().getClient().close(2000).then(resolve))
    return {
      statusCode: 500,
      headers: { 'Content-Type': 'text/plain' },
      body: 'Missing body parameters'
    }
  }
  return {
    statusCode: 200,
  }
};

@Andriy-Kulak That's also stated in the docs:

After shutdown the client cannot be used any more so make sure to only do that right before you shut down the application.

So I don't know how we can handle this in lambda where we don't know when the application will be killed. Best would be to drain sentry per request like we could with the old API?

@HazAT could we reopen this, please? I think it's important to have a way to work with this on Lambda, which is becoming an increasingly common target to deploy to.

This is currently blocking me from upgrading to the latest version...

Personally, I would prefer being able to get a Promise/callback when reporting an error. Having a way to drain the queue without actually closing it afterward would be the next best thing...

What was the rationale of removing the callback from captureException?

@albinekb it does not work at all if I remove the following line

await new Promise(resolve => Sentry.getCurrentHub().getClient().close(2000).then(resolve))

@LinusU what is your solution, and which Sentry or Raven version are you using?

For me, basically, the following works with @sentry/node 4.3.0, but I have to make the lambda manually wait some period of time (in this case 2 seconds) for Sentry to do what it needs to do. I'm not sure why that wait needs to be there, because we are already awaiting Sentry's captureException request. If I don't have the waiting period afterwards, Sentry does not seem to send the error.

'use strict'

const Sentry =  require('@sentry/node')
Sentry.init({
  dsn: 'xxx',
  environment: process.env.STAGE
});

module.exports.createPlaylist = async (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false
  if(!event.body) {
    const error = new Error('Missing body parameters in createPlaylist')
    await Sentry.captureException(error)
    await new Promise(resolve => {setTimeout(resolve, 2000)})
    return {
      statusCode: 500,
      headers: { 'Content-Type': 'text/plain' },
      body: 'Missing body parameters'
    }
  }
  return {
    statusCode: 200,
  }
};

We're also getting burned by this on Lambda. We started with the new libs and are totally boxed out, and are considering going back to Raven. We're writing tests right now to attempt to close the hub and then reinitialize, which would be a workable workaround if it holds water. But it's still hacky and likely to cause problems under load.

Personally I'd prefer some sort of flush() that returns a promise – hard to find a downside. Think it'd ever happen?

what is your solution, and which Sentry or Raven version are you using?

I'm using the following express error handler:

app.use((err: any, req: express.Request, res: express.Response, next: express.NextFunction) => {
  let status = (err.status || err.statusCode || 500) as number

  if (process.env.NODE_ENV === 'test') {
    return next(err)
  }

  if (status < 400 || status >= 500) {
    Raven.captureException(err, () => next(err))
  } else {
    next(err)
  }
})

I'm then using scandium to deploy the Express app to Lambda

edit: this is with Raven "raven": "^2.6.3",

The dream API would be something like this 😍

Sentry.captureException(err: Error): Promise<void>

@LinusU https://github.com/getsentry/sentry-javascript/blob/master/packages/core/src/baseclient.ts#L145-L152 🙂

You have to use the client instance directly to get it, though. The reason for this is that we decided that the main scenario is a "fire and forget" type of behavior, thus it's not an async method. Internally, however, we do have an async API which we use ourselves.

Seems that what I actually want is something more like:

const backend = client.getBackend()
const event = await backend.eventFromException(error)
await client.processEvent(event, finalEvent => backend.sendEvent(finalEvent))

In order to skip all the queueing and buffering...

I get that the design is tailored to "fire and forget" and to running in a long-running server, and it's probably quite good at that since it does a lot of buffering, etc. The problem is that this is the exact opposite of what you want for Lambda, App Engine, and other "serverless" architectures, which are becoming more and more common.

Would it be possible to have a special method that sends the event as fast as possible, and returns a Promise that we can await? That would be perfect for the serverless scenarios!

class Sentry {
  // ...

  async unbufferedCaptureException(err: Error): Promise<void> {
    const backend = this.client.getBackend()
    const event = await backend.eventFromException(err)
    await this.client.processEvent(event, finalEvent => backend.sendEvent(finalEvent))
  }

  // ...
}

@LinusU we'll most likely create a specific serverless package for this scenario. We just need to find some time, as it's the end of the year and things are getting crowded now. Will keep you posted!

we'll most likely create a specific serverless package for this scenario

That would be amazing! 😍

@mtford90

when exactly would I use this better solution? As far as I know it's not possible to know when the lambda will be shut down - plus it seems silly to wait for an arbitrary amount of time for shutdown to allow sentry to do its thing - especially on expensive high memory/cpu lambda functions.

(talking about draining)

It's meant to be used as the last thing before closing down the server process. The timeout in the drain method is the maximum time we'll wait before shutting down the process, which doesn't mean we will always use up that time. If the server is fully responsive, it'll send all the remaining events right away.

There's no way to know this per se, but there's a way to tell the lambda when it should be shut down using the handler's callback argument.

Also @LinusU, I re-read your previous comment, specifically this part:

Would it be possible to have a special method that sends the event as fast as possible, and returns a Promise that we can await? That would be perfect for the serverless scenarios!

This is how we implemented our buffer. Every captureX call on the client is added to the buffer, that's correct, but it's not queued in any way; it's executed right away, and this pattern is only used so that we can tell whether everything was successfully sent through to Sentry.

https://github.com/getsentry/sentry-javascript/blob/0f0dc37a4276aa2b832da451307bc4cd5413b34d/packages/core/src/requestbuffer.ts#L12-L18
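A minimal sketch of that track-but-don't-queue pattern (my own illustration, not the linked requestbuffer.ts): each task is already executing when it's added; the buffer only remembers the in-flight promises so a later drain can report completion.

```javascript
// Illustrative only: tasks run immediately; the buffer just tracks them.
class RequestBuffer {
  constructor() {
    this.buffer = [];
  }

  // The promise passed in is already in flight; record it so drain()
  // can tell whether everything finished.
  add(taskPromise) {
    this.buffer.push(taskPromise);
    const forget = () => {
      this.buffer = this.buffer.filter(p => p !== taskPromise);
    };
    taskPromise.then(forget, forget);
    return taskPromise;
  }

  // Resolves once every tracked request has settled.
  drain() {
    return Promise.all(this.buffer).then(() => true, () => true);
  }
}

const buffer = new RequestBuffer();
buffer.add(Promise.resolve('event sent')); // fires immediately, is only tracked
buffer.drain().then(ok => console.log(ok)); // true once the send settled
```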

This means that if you do something like this in AWS Lambda (assuming you want to use default client, which is the simplest case):

import * as Sentry from '@sentry/node';

Sentry.init({ dsn: '__YOUR_DSN__' });

exports.handler = (event, context, callback) => {
    try {
      // do something
    } catch (err) {
      Sentry.getCurrentHub()
        .getClient()
        .captureException(err)
        .then((status) => {
          // request status
          callback(null, 'Hello from Lambda');
        })
    }
};

You can be sure that it was sent right away and there was no timing/processing overhead.

@kamilogorek
Does this mean something like this should work in a async/await handler (where you don't use the callback)?

import * as Sentry from '@sentry/node';

Sentry.init({ dsn: '__YOUR_DSN__' });

exports.handler = async (event, context) => {
    try {
      // do something

      return 'Hello from Lambda';
    } catch (err) {
      await Sentry.getCurrentHub().getClient().captureException(err);
      return 'Hello from Lambda with error';
    }
};

@jviolas totally! :)

Seems like the following changes would work for me then ☺️

-import Raven = require('raven')
+import * as Sentry from '@sentry/node'

 // ...

-Raven.config(config.SENTRY_DSN)
+Sentry.init({ dsn: config.SENTRY_DSN })

 // ...

 app.use((err: any, req: express.Request, res: express.Response, next: express.NextFunction) => {
   let status = (err.status || err.statusCode || 500) as number

   if (process.env.NODE_ENV === 'test') {
     return next(err)
   }

   if (status < 400 || status >= 500) {
-    Raven.captureException(err, () => next(err))
+    Sentry.getCurrentHub().getClient().captureException(err).then(() => next(err))
   } else {
     next(err)
   }
 })

To be honest, every line got a little bit uglier 😆 but I guess that it's better under the hood...

@kamilogorek I couldn't find getCurrentHub() in the docs on your website, is this API guaranteed not to break without a major semver bump? ❤️

@kamilogorek I couldn't find getCurrentHub() in the docs on your website, is this API guaranteed not to break without a major semver bump? ❤️

Yes, it's guaranteed. It's part of the @sentry/hub package, which is described here - https://docs.sentry.io/enriching-error-data/scopes/?platform=browser

We are discussing kinda "advanced uses" here in this thread and we haven't got to the point of documenting them yet. We'll do this eventually :)

Clearly what we're missing here is some documentation and good practices for these kinds of advanced use cases. It'll be really good once it's documented; even a blog post would be a good start.
Otherwise, the new SDK is really simple to use and the unification is really nice.

@kamilogorek
Does this mean something like this should work in a async/await handler (where you don't use the callback)?

import * as Sentry from '@sentry/node';

Sentry.init({ dsn: '__YOUR_DSN__' });

exports.handler = async (event, context) => {
    try {
      // do something

      return 'Hello from Lambda';
    } catch (err) {
      await Sentry.getCurrentHub().getClient().captureException(err);
      return 'Hello from Lambda with error';
    }
};

Doing something as suggested above does work, except I am unable to add extra context. For example, if I do:

Sentry.configureScope(scope => {
   scope.setExtra('someExtraInformation', information);
});
await Sentry.getCurrentHub().getClient().captureException(err);

I will not actually see 'someExtraInformation' in Sentry.

Someone did suggest an alternative method at the top of this thread, and that works, but seems hacky (forcing a timeout).

Sentry.configureScope(scope => {
  scope.setExtra('someExtraInformation', information);
});
Sentry.captureException(error);
await new Promise(resolve => Sentry.getCurrentHub().getClient().close(2000).then(resolve));

@kamilogorek @jviolas

import * as Sentry from '@sentry/node';

Sentry.init({ dsn: '__YOUR_DSN__' });

exports.handler = async (event, context) => {
   try {
     // do something

     return 'Hello from Lambda';
   } catch (err) {
     await Sentry.getCurrentHub().getClient().captureException(err);
     return 'Hello from Lambda with error';
   }
};

Can this be also applied to _uncaught exceptions_ ? It seems modifying the Sentry.Integrations.OnUncaughtException integration is the official way to do so but the documentation is pretty poor right now.

+1 for this. At least having something officially documented would be good. Serverless is growing fast as of 2019; I really want to see official support from Sentry for it. One of the ideas I read here and really liked was having something like Sentry.flush() to send all the events that are queued.

@rdsedmundo Can you elaborate why this approach isn't working for you?

import * as Sentry from '@sentry/node';

Sentry.getCurrentHub().getClient().close(2000).then(result => {
  if (!result) {
    console.log('We reached the timeout for emptying the request buffer, still exiting now!');
  }
  global.process.exit(1);
})

This is our official approach and basically Sentry.flush().
ref: https://docs.sentry.io/error-reporting/configuration/draining/?platform=javascript

@HazAT The problem with that comes when you think about AWS Lambda container reuse, which in TL;DR terms means that a process that just served a request can serve a brand new one made within a short window of time. If I close the connection with this snippet you gave, and the container is reused, I'd need to manage to create a new hub for the new request. I can easily see this getting tricky. That's why a simple await Sentry.flush() would be a better solution:

import Sentry from './sentry'; // this calls Sentry.init under the hood

export const handler = async (event, context) => {
  try {
    ...
  } catch (error) {
    Sentry.captureException(error);
    await Sentry.flush(); // could even be called on the finally block

    return formatError(error);
  }
}

@rdsedmundo I am not sure if I am misunderstanding something, but if you do

import Sentry from './sentry'; // this calls Sentry.init under the hood

export const handler = async (event, context) => {
  try {
    ...
  } catch (error) {
    Sentry.captureException(error);
    await Sentry.getCurrentHub().getClient().close(2000);

    return formatError(error);
  }
}

It's exactly like await Sentry.flush only that you define the timeout.

The promise resolves after 2000ms for sure with false if there was still stuff in the queue.
Otherwise close will resolve with true if the queue has been drained before the timeout is reached.

Or will the container be reused before all promises are resolved? (I can't imagine that)

@HazAT isn't the problem that close(...) will prevent the client from being used again? Lambda reuses the same Node process so the calls would be something like this, which I guess will stop working after the first call to close?

  • Sentry.init()
  • Sentry.captureException()
  • Sentry.getCurrentHub().getClient().close()
  • Sentry.captureException()
  • Sentry.getCurrentHub().getClient().close()
  • Sentry.captureException()
  • Sentry.getCurrentHub().getClient().close()
  • Sentry.captureException()
  • Sentry.getCurrentHub().getClient().close()
  • ...

No, close doesn't dispose the client, it's just here for draining the transport queue.
I agree that the name close in this context may be misleading but at least in JS/Node close doesn't do anything with the client and it's perfectly fine to still use it afterward.

Edit: If that was actually the "issue" I will update the docs to make this clear.

Cool. But the documentation is wrong then:

After shutdown the client cannot be used any more so make sure to only do that right before you shut down the application.

OK, we just discussed this matter internally in the team.
You guys were right and while JavaScript right now doesn't behave the way we documented it 🙈 we will introduce a flush function which will do exactly what you expect.

So right now you can use close without any issues (not sure if we are going to change it to dispose/disable the client in the future).
But there will be a flush function which is there to _just_ flush the queue.

I will update this issue once the feature landed.

Since I got a bit lost in all of these comments, is this how the Express error handler (mimicking the one from this repo) should look?

function getStatusCodeFromResponse(error) {
    const statusCode = error.status || error.statusCode || error.status_code || (error.output && error.output.statusCode);
    return statusCode ? parseInt(statusCode, 10) : 500;
}

app.use(async (err, req, res, next) => {
    const status = getStatusCodeFromResponse(err);

    if (status >= 500) {
        Sentry.captureException(err)

        await Sentry.getCurrentHub().getClient().close(2000)
    }

    next(err)
})

It looks like it's working and it doesn't lose extra data as in @rreynier's code.

Personally I feel that

await Sentry.getCurrentHub().getClient().captureException(err)

is cleaner than:

Sentry.captureException(err)
await Sentry.getCurrentHub().getClient().close(2000)

close really reads like it will close the client...

Full example:

import * as Sentry from '@sentry/node'

// ...

Sentry.init({ dsn: config.SENTRY_DSN })

// ...

app.use((err: any, req: express.Request, res: express.Response, next: express.NextFunction) => {
  let status = (err.status || err.statusCode || 500) as number

  if (process.env.NODE_ENV === 'test') {
    return next(err)
  }

  if (status < 400 || status >= 500) {
    Sentry.getCurrentHub().getClient().captureException(err).then(() => next(err))
  } else {
    next(err)
  }
})

@LinusU I tried that and for some reason, it doesn't send extra data along with the stack trace. It basically sends just stack trace. No info about user, OS or anything.

Aha, that's not good at all 😞

While we wait for flush, as a more reliable workaround than both of the above options you can report and wait for the result, _and_ include the scope, using the below snippet:

const scope = Sentry.getCurrentHub().getScope();
await Sentry.getCurrentHub().getClient().captureException(error, scope);

I'm using this, and it seems to work reliably for me, with reported errors including everything I'd expect.

I'm actually using all this with Netlify Functions, but the theory is the same with Lambda etc. I've written up a post with the full details of how to get this working, if anybody is interested: https://httptoolkit.tech/blog/netlify-function-error-reporting-with-sentry/

I use this helper in all my lambdas currently.

@pimterry Isn't this basically the same solution as @LinusU suggested? I've tried it and it doesn't send extra data as well.

This approach has worked out for me thus far @ondrowan

@ondrowan it's the same, but manually grabbing and including the current scope. That should be enough to get you working exceptions though I think. With the previous version, I was getting unlabelled events, and now with this change my exceptions come through with all the extra normal details.

@vietbui @albinekb @Andriy-Kulak @LinusU @dwelch2344 @jviolas @rreynier @guillaumekh @rdsedmundo @ondrowan @pimterry @zeusdeux not sure who's still interested in this use-case, so excuse me if I shouldn't call you.

Starting with 4.6.0, there's no more client/hub dance. You can just call any of our captureX methods and then use Sentry.flush() to wait until everything is sent to the server. All scope/extra data should be preserved without any dev interaction.

Here's an example with succeeded/timed-out requests.

[screenshot omitted: example code showing succeeded and timed-out flush requests]

Hope it helps! :)

Nice!

Are there still plans on making a minimal package just for capturing exceptions from Lambda and other serverless solutions? I think that that would still be a really nice addition ❤️

@LinusU hopefully yes, but we are swamped with other languages SDKs right now 😅

Thanks all for all the possible solutions, tl;dr for everyone coming here

Use: await Sentry.flush() to send all pending requests, this has been introduced in 4.6.x.

Closing this, please feel free to open a new issue in case anything is missing (but this thread is already super long).

Cheers 👍 🎉

@kamilogorek Hey! A quick fyi, I am using Sentry.flush in my app in place of the old workaround and none of the errors are being reported. I am currently reverting back to the old workaround from the updated flush method.

@zeusdeux is there any way you could provide some debug info or a repro case for this?
You are overriding the captureException method, which adds the event to the buffer, and then you should await the flush return value. Have you tried to use it the "regular way"?

@kamilogorek I wish I had debug info, but there's nothing in the logs. I always did await the overridden captureException. By the regular way, do you mean without overriding captureException?

@zeusdeux exactly, just call our native Sentry.captureException(error) without any overrides.

So your helper will be:

import * as Sentry from '@sentry/node'

export function init({ host, method, lambda, deployment }) {
  const environment = host === process.env.PRODUCTION_URL ? 'production' : host

  Sentry.init({
    dsn: process.env.SENTRY_DSN,
    environment,
    beforeSend(event, hint) {
      if (hint && hint.originalException) {
        // eslint-disable-next-line
        console.log('Error:', hint.originalException);
      }
      return event;
    }
  })

  Sentry.configureScope(scope => {
    scope.setTag('deployment', deployment)
    scope.setTag('lambda', lambda)
    scope.setTag('method', method)
  })
}

and in the code you call it:

import * as Sentry from '@sentry/node'

try {
  // ...
} catch (err) {
  Sentry.captureException(err);
  await Sentry.flush(2000);
  return respondWithError('Something went wrong', 500);
}

@kamilogorek I'll give it a go and report back. Also, thanks for the tip on beforeSend ^_^

await Sentry.flush(2000);

is also ~~not~~ working for me.

@tanduong can you provide repro case? Just stating that it doesn't work isn't too helpful 😅

@kamilogorek actually, I just found out that

await Sentry.getCurrentHub().getClient().close(2000)

doesn't work for me either because my lambda function is attached to VPC.

I confirm that

await Sentry.flush(2000);

is actually working.

BTW, so how would you deal with lambda in VPC? Attach to a NAT gateway? I just want Sentry but not the public internet.

@tanduong Sentry is on the public internet, so yes, you need to have a NAT gateway if your lambda is running within your VPC. Otherwise you would have to explore the hosted Sentry option.

What's flush(2000) actually doing? I had this code working mostly fine, but now that I have a couple of captureMessage calls happening concurrently, it's timing out every time!

Flushing the internal queue of messages over the wire

Ok, that makes total sense. I think my issue then is that this promise never resolves when there's nothing else to flush? Whenever I run my wrapped captureException fn concurrently, it times out my handler.

export const captureMessage = async (
  message: string,
  extras?: any,
): Promise<boolean> =>
  new Promise((resolve) => {
    Sentry.withScope(async (scope) => {
      if (typeof extras !== 'undefined') {
        scope.setExtras(extras)
      }
      Sentry.captureMessage(message)
      await Sentry.flush(2000)
      resolve(true)
    })
  })

await Sentry.flush() doesn't really finish after the first captureMessage call.

I have what I believe is a similar issue to @enapupe's. If you call await client.flush(2000); in parallel, only the first promise is resolved. This can happen in AWS Lambda contexts where the client is reused among multiple concurrent calls to the handler.

I am using code like this:

let client = Sentry.getCurrentHub().getClient();
if (client) {
  // flush the sentry client if it has any events to send
  log('begin flushing sentry client');
  try {
    await client.flush(2000);
  } catch (err) {
    console.error('sentry client flush error:', err);
  }
  log('end flushing sentry client');
}

But when I make two calls to my lambda function in rapid succession, I get:

  app begin flushing sentry client +2ms
  app begin flushing sentry client +0ms
  app end flushing sentry client +2ms

You can see that the second promise is never resolved.

@esetnik I've filed an issue on that: https://github.com/getsentry/sentry-javascript/issues/2131
My current workaround is a wrapper flush fn that always resolves (based on a timeout):

const resolveAfter = (ms: number) =>
  new Promise((resolve) => setTimeout(resolve, ms))

const flush = (timeout: number) =>
  Promise.race([resolveAfter(timeout), Sentry.flush(timeout)])
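To make the behavior of this workaround concrete: even if the wrapped flush promise never settles (the bug filed above), the timer branch of the race does, so the wrapper always resolves. A self-contained illustration, where hangingFlush is a stand-in for the misbehaving Sentry.flush:

```javascript
const resolveAfter = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Stand-in for a Sentry.flush() whose promise never settles:
const hangingFlush = () => new Promise(() => {});

// Same shape as the workaround above, with the stand-in swapped in:
const safeFlush = (timeout) =>
  Promise.race([resolveAfter(timeout), hangingFlush()]);

safeFlush(50).then(() => console.log('settled despite the hanging flush'));
```

The trade-off, as noted below, is that a flush that would have completed quickly still cannot signal failure; the race masks the hang rather than fixing it.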

@enapupe I added a note about your workaround in #2131. I believe it will cause a performance regression on concurrent flush.

In case anybody is having any issues: this works beautifully.

@SarasArya @HazAT
First of all... Thanks for sharing your solution! :)
There is one thing: the configureScope callback is, I guess, supposed to be called before captureException, but it is not executed in the same "thread".
Couldn't this lead to race conditions?

@cibergarri I don't think so; it looks synchronous to me. If you had an async method in there, then there could be race conditions.
Consider it like an array's .map: the same thing is happening here, in case you have issues wrapping your head around it. I hope that helps.

Yeah, it's totally fine to do that

Update: Sentry now supports automated error capture for Node/Lambda environments: https://docs.sentry.io/platforms/node/guides/aws-lambda/

I'm using @sentry/serverless like this:

const Sentry = require("@sentry/serverless");
Sentry.AWSLambda.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0,
  environment: appEnv
});

exports.main = Sentry.AWSLambda.wrapHandler(async (event, context) => {
  try {
    // my code
  } catch (error) {
    Sentry.captureException(error);
    await Sentry.flush(3000);
  }
});

It does not work on lambda.
In my testing env it was working, but in prod, where there are a lot of concurrent executions and the containers are reused, it is logging only about 10% of the total amount.

Any advice?

@armando25723

Please tell us how you measured that it loses exception events. Do you have a code sample showing how such a lost exception was thrown? We need more context.

const Sentry = require("@sentry/serverless"); // "version": "5.27.3"
Sentry.AWSLambda.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0,
  environment: appEnv
});
exports.main = Sentry.AWSLambda.wrapHandler(async (event, context) => {
  try {
    throw new Error('Test Error');
  } catch (error) {
    Sentry.captureException(error);
    await Sentry.flush(3000);
  }
});

What is happening?
If the function is invoked several times with a short interval between invocations, the event is only logged once.
If the time interval between invocations is larger, all events are logged.

I assume the problem occurs when the invocation runs on a reused container.
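For context on the reuse behavior suspected here: Lambda keeps module scope alive between warm invocations on the same container, so anything initialized at module level (such as the SDK client set up by Sentry.AWSLambda.init) is shared across calls. A minimal simulation of that lifecycle; handler and coldStart are hypothetical stand-ins, not the code above:

```javascript
// Module scope survives between warm invocations on the same container.
let coldStart = true; // set once per container, like module-level init code

async function handler(event) {
  const wasColdStart = coldStart;
  coldStart = false; // every later call on this container is "warm"
  return { coldStart: wasColdStart };
}

// Two rapid calls land on the same "container" (same module state):
handler({}).then(r => console.log(r.coldStart)); // true: first invocation
handler({}).then(r => console.log(r.coldStart)); // false: reused container
```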

I have also tried await Sentry.captureException(error); and await Sentry.flush(); as well as not flushing at all. Same result.

@marshall-lee what do you recomend? Should I create an issue, I'm stuck here.

@armando25723 Looks like the server is responding with 429 (Too Many Requests) while sending these events. We return that in over-quota/rate-limiting scenarios. Do you know if you are sequentially sending errors or over quota? We can debug further if you think these are real error events getting dropped and you are under our 5k limit for the free tier.

@ajjindal All our other projects are working fine with Sentry. The organization slug is "alegra", the project name is mail-dispatch-serverless under #mail-micros. We have been using Sentry for a long time, but this is our first time with serverless. This is not the free tier; I can't tell you exactly which plan we are using, but it is a paid one.
It would be nice if you could help me debug further.
Thanks for the reply : )

PS: it is the Team plan
