Autofixture: NCrunch and AutoFixture integration issues

Created on 23 Aug 2017  ·  15 comments  ·  Source: AutoFixture/AutoFixture

NCrunch currently doesn't support AutoFixture, and that is clearly stated in their documentation. I filed an issue with xUnit and tried to solve it somehow; however, it turned out to be a more complex problem.

I'm going to invite the NCrunch developers here to discuss this problem together. We could probably also move the discussion to the xUnit side later.

We should find a way to work with NCrunch, given that both of our products are quite popular 😅

question

Most helpful comment

@remcomulder @zvirja Just wanted to say that this discussion (which helped me understand a similar issue), and the dedication you've demonstrated here, is nothing but awesome! Thank you both for your great products.

All 15 comments

Hi, I'm the developer of NCrunch.

I'd be happy to work with you to find a solution to this. With the current design, I'm sadly out of options for solving the problem solely on my side of the integration point. I believe that my runner is likely not the only one that has experienced problems with the current design.

As you've likely understood through the various documentation and support forum posts I've written up, the root of the problem lies in AutoFixture's random generation of test parameters. Because test parameters are a critical element in identifying a test and subsequently executing it, selective execution of tests becomes impossible if the parameters change every time a test is constructed/discovered.

The only reliable way I can think of to solve this issue would be to remove all random generation of test parameters and use fixed values (i.e. placeholders or consts) instead, or otherwise limit the seeding of parameter values to the test case itself. In this way, the tests would always be exactly the same and could be consistently discovered like any other test. Every user I have dealt with that has used AutoFixture for parameter generation has done so for parameters they don't 'care' about for the purposes of their testing, so I hope this approach might not have any real downside in the eyes of the user. A benefit is that it would also work with all versions of NCrunch immediately and would not require any code changes to NCrunch or any other runner.

@remcomulder Thanks a lot for participating here - highly appreciated! I've investigated this a bit and would like to share my findings. All my findings apply to xUnit only.

TL;DR: xUnit supports such theories, and NCrunch should be able to support them for xUnit as well. For NUnit that's an open question, and I haven't investigated it yet.


The feature we use seems to be _legal_ for xUnit. We have our own TestDataDiscoverer that indicates that our theories cannot be pre-discovered (because we generate random data), and we decorate our AutoDataAttribute with this discoverer. xUnit respects this attribute and doesn't resolve parameter values during discovery (see the sketch below).
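
For readers unfamiliar with this xUnit extension point, here is a minimal sketch of the mechanism (illustrative names; AutoFixture's actual discoverer implementation may differ in details):

using System;
using System.Collections.Generic;
using System.Reflection;
using Xunit.Abstractions;
using Xunit.Sdk;

namespace MyTests
{
    // Returning false tells xUnit that the data cannot be enumerated at
    // discovery time, so runners see the theory as a single collapsed test.
    public class NoPreDiscoveryDataDiscoverer : DataDiscoverer
    {
        public override bool SupportsDiscoveryEnumeration(
            IAttributeInfo dataAttribute, IMethodInfo testMethod)
        {
            return false;
        }
    }

    // The discoverer is bound to the data attribute by type and assembly name
    // (here the assembly is assumed to be called "MyTests").
    [DataDiscoverer("MyTests.NoPreDiscoveryDataDiscoverer", "MyTests")]
    public class MyAutoDataAttribute : DataAttribute
    {
        public override IEnumerable<object[]> GetData(MethodInfo testMethod)
        {
            // Random argument generation happens here, at execution time only.
            yield return new object[] { new Random().Next() };
        }
    }
}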

I believe that my runner is likely not the only one that has experienced problems with the current design.

Actually, that is not true: both R# and VS work fine with such theories. They also allow re-running a particular theory even if it contains auto-generated data. I'd suggest focusing on VS, as it also has discovery & run phases and is open source.

Consider the following test code:

using Ploeh.AutoFixture.Xunit2;
using Xunit;

namespace Playground
{
    public class UnitTest
    {
        [Fact]
        public void StableTest()
        {
            Assert.True(true);
        }

        [Theory]
        [InlineData(1)]
        public void StableInlineTest(int value)
        {
            Assert.Equal(1, value);
        }

        [Theory, AutoData]
        public void VolatileTest(int value)
        {
            Assert.True(value % 2 == 0);
        }

        [Theory]
        [InlineAutoData(10)]
        [InlineAutoData(20)]
        [InlineAutoData(30)]
        public void VolatileTestWithInline(int value, int autoValue)
        {
            Assert.NotEqual(value, 40);
        }
    }
}

.NET Framework test library project. VS 2017.3. Target framework: 4.5.2. Installed packages:

  • xunit 2.2.0
  • xunit.runner.visualstudio 2.2.0
  • AutoFixture 3.50.6
  • AutoFixture.Xunit2 3.50.6

If you trigger discovery in VS, you will see the following output:
[screenshot: VS Test Explorer showing the discovered tests]

As you might notice, for theories that support data discovery (StableInlineTest), the VS runner shows the actual data the test will be run with (1). For tests that don't support data discovery and contain auto-generated data (VolatileTest, VolatileTestWithInline), VS doesn't discover the theory cases and shows only the whole theory. Only after a run will you see the values for that particular invocation:
[screenshot: VS Test Explorer showing the generated argument values after a run]

Now you can re-run that particular theory, and it will run _all the theory cases_ again.

As you can see, there is actually a way to support such theories, and xUnit does that perfectly. NCrunch should take into account the fact that some theory cases cannot be pre-discovered. For such theories you need to re-run the whole theory rather than a particular case. I don't see why that wouldn't be possible.

The only reliable way I can think of to solve this issue would be to remove all random generation of test parameters and use fixed values (i.e. placeholders or consts) instead

Currently xUnit doesn't expose an API to change the display name and replace generated data with placeholders. I've created an issue for them (see here); however, Brad's reply is that it's unrealistic, and they suggest simply disabling the discovery, which we are doing already.

or otherwise limit the seeding of parameter values to the test case itself.

Unfortunately, that is not currently possible for our product; a lot of things would have to be rewritten to support a single seed.


Member data

In the documentation here you have another sample (I've rewritten it for xUnit):

public class MemberDataSample
{
    // Note: MemberData requires the data member to be static.
    public static IEnumerable<object[]> GetData()
    {
        yield return new object[]
        {
            DateTime.Now
        };
    }

    [Theory, MemberData(nameof(GetData), DisableDiscoveryEnumeration = true)]
    public void DateTheory(DateTime dt)
    {
        Assert.True(DateTime.Now - dt < TimeSpan.FromMinutes(1));
    }
}

It's absolutely legal for xUnit, as there is the DisableDiscoveryEnumeration property. It works like the sample above - theory cases are not pre-discovered.


The bottom line

It seems that xUnit provides the instruments to understand whether a test is volatile or not (by means of pre-discovery enumeration support). You can use the VS implementation as a sample to understand how they handle that and do the same in your product. Likely they simply use the xUnit SDK and its message sinks; a sketch follows below.
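
For illustration, here is roughly what such a discovery pass looks like through the xunit.runner.utility API (a sketch based on the xUnit 2.x runner surface; exact class names and signatures vary between versions, so treat the details as assumptions):

using System;
using Xunit;

class DiscoverySketch
{
    static void Main()
    {
        // Discover tests without running them, the way an external runner would.
        using (var controller = new XunitFrontController(
            AppDomainSupport.Denied, "Playground.dll"))
        using (var sink = new TestDiscoverySink())
        {
            controller.Find(false, sink, TestFrameworkOptions.ForDiscovery());
            sink.Finished.WaitOne();

            // Theories whose enumeration is disabled surface here as a single
            // collapsed test case instead of one case per data row.
            foreach (var testCase in sink.TestCases)
                Console.WriteLine(testCase.DisplayName);
        }
    }
}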

Given that both R# and VS support such theories just fine, it makes me think that everything is actually right with our product.

As for NUnit - let's discuss that afterwards, as I haven't investigated it yet. Probably we don't have such an API there.

Could you please share your opinion regarding xUnit support given my findings? Will you add support for xUnit (stop re-running all tests) and stop showing that incompatibility warning? 😉

It seems I may owe you an apology here. The use case you've described above works correctly under NCrunch, for the reasons you've explained. xUnit avoids pre-enumeration of the theory and collapses it into a single test, where it is safely identified and executed. On testing this now, I can confirm that NCrunch does this correctly. It seems we do not have an obvious problem with InlineAutoData.

I am not sure why this failed for me earlier. I am presently unable to create a scenario myself in which it fails. I know I have had users mention to me that InlineAutoData wasn't working, though I guess they'll need to come forward and provide examples of where this is the case.

I'd like to draw your attention to a specific use case that I am aware will break both NCrunch and the VS Runner. I assume it would also break ReSharper and TD.NET, though I haven't tested these as I don't have them installed:

using Ploeh.AutoFixture;
using Ploeh.AutoFixture.Xunit2;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Xunit;

namespace XUnitAutoFixture
{
    public class TestFixture
    {
        private static readonly Fixture Fixture = new Fixture();

        public static IEnumerable<object[]> SomeTestData()
        {
            yield return new object[] { Fixture.Create<string>() };
            yield return new object[] { Fixture.Create<string>() };
        }

        [Theory, MemberData(nameof(SomeTestData))]
        public void Test(object value)
        {

        }
    }
}

The above code will fail under all selective execution scenarios. It will, however, pass if the tests are discovered and executed in the same step. This scenario was the catalyst behind the warning being given by NCrunch about AutoFixture, as users were doing this and asking for support. Given that I didn't know about AutoFixture disabling pre-enumeration, I had assumed (incorrectly) that InlineAutoData was the same.

My present assumption is that you do not support such a scenario. Is this correct?

On testing this now, I can confirm that NCrunch does this correctly.

That's great! Happy to hear that xUnit is actually fully supported by NCrunch 🎉 As for NUnit - it has always been clumsy, and we need to investigate what we can do there.

However, I still observe issues with NCrunch and AutoFixture. Currently I have an xUnit project with AutoFixture, and if I change even a single line, all the tests are re-run. It seems that you enable such behavior when you detect AutoFixture, to ensure that nothing is missed.

Is that so? If so, could you please disable that behavior for xUnit, as everything is fine there?


I'd like to draw your attention to a specific use case that I am aware will break both NCrunch and the VS Runner.
My present assumption is that you do not support such a scenario. Is this correct?

Yep, that scenario will break all the runners. However, as was correctly pointed out somewhere on your forum, that is not because of AutoFixture, as you might write something like this, which also won't work:

public class TestFixture
{
    public static IEnumerable<object[]> SomeTestData()
    {
        yield return new object[] { DateTime.Now.Ticks };
        yield return new object[] { DateTime.Now.Ticks };
    }

    [Theory, MemberData(nameof(SomeTestData))]
    public void Test(object value)
    {

    }
}

For such a scenario it's your responsibility to manually disable pre-discovery, so it should look like this (pay attention to the DisableDiscoveryEnumeration property):

public class TestFixture
{
    private static readonly Fixture Fixture = new Fixture();

    public static IEnumerable<object[]> SomeTestData()
    {
        yield return new object[] { Fixture.Create<string>() };
        yield return new object[] { Fixture.Create<string>() };
    }

    [Theory, MemberData(nameof(SomeTestData), DisableDiscoveryEnumeration = true)]
    public void Test(object value)
    {

    }
}

Actually, it's supposed to be a very rare scenario, as we have the AutoData and InlineAutoData attributes for data generation. Also, AutoFixture is simply a tool, and it's the developer's responsibility to use it correctly.

I wouldn't move NCrunch into a special mode simply because some people might use the tool incorrectly. The best we can do here is document a Known Issue somewhere describing this particular scenario and ask people to use xUnit properly (as they might not know about this).


Could you please confirm that you run tests in a special way if you detect AutoFixture, and if so, could you please disable that mode for xUnit (for NUnit it's better to keep it as is)? Also, this page should likely be updated.

At the moment, NCrunch implements no special handling for AutoFixture outside of the compatibility warning. The behaviour you've experienced is probably due to the engine mode you have selected. It's entirely configurable - if you switch to 'Run impacted tests automatically, others manually' mode I think you'll see the engine behave in the way you're expecting.

With what I've learnt from you, I feel ready to remove the AutoFixture warning from NCrunch entirely. I had already been convinced by users to reword it, as it became clear very early on that the warning was overly broad and some features of AutoFixture were still working correctly. I think I've quite seriously misunderstood how AutoFixture is implemented under xUnit.

So I guess this is probably a best case scenario for us. The use case that most badly broke me isn't technically supported anyway, and everything else is working fine. Meanwhile, I'll be happy to try to forget my own embarrassment at having tested and verified this to an incorrect conclusion, if you can find cause to forgive me for the overzealous compatibility warning.

Meanwhile, I'll be happy to try to forget my own embarrassment at having tested and verified this to an incorrect conclusion, if you can find cause to forgive me for the overzealous compatibility warning.

No worries! It's absolutely fine - we are all here to help each other understand things 😅 It's great that you followed up on the ticket and we could discuss this - much appreciated 🥇

With what I've learnt from you, I feel ready to remove the AutoFixture warning from NCrunch entirely.

Well, we still have troubles with NUnit (we are on our way though), so this message still looks relevant for NUnit projects. It probably makes sense to disable it for xUnit only, unless we introduce full support for NUnit (or at least a way to enable that support).

I still have one thing that is unclear to me. What does it mean that you don't support e.g. AutoFixture & NUnit? Yes, the test names are different each time, but does it matter if you re-run all the tests (if the engine mode is set so)? Or does it mean that you don't support the "Impacted only" engine mode for them? My thought is that instead of saying "no support", it's probably better to narrow it down to the particular scenarios you don't support, while the rest should be fine.

'Run impacted tests automatically, others manually'

I wasn't able to find this setting in my 3.10.0.20 installation. Do you mean the Only consider tests 'Out of date' if they are 'Impacted' setting that should be set to true? Sorry if I missed it somewhere - I'm a bit new to this product.


Documentation update

It probably makes sense not to _remove_ the xUnit and AutoFixture case from this page, but instead to describe how to use it correctly (use the DisableDiscoveryEnumeration property). It would also be good to describe the "NUnit Test Case With Inconsistent Name" sample for xUnit and ask users to combine the DisableDiscoveryEnumeration property with MemberDataAttribute.

I still have one thing that is unclear to me. What does it mean that you don't support e.g. AutoFixture & NUnit? Yes, the test names are different each time, but does it matter if you re-run all the tests (if the engine mode is set so)? Or does it mean that you don't support the "Impacted only" engine mode for them? My thought is that instead of saying "no support", it's probably better to narrow it down to the particular scenarios you don't support, while the rest should be fine.

This is because of the lifespan of the test under NCrunch. NCrunch has important state that is assigned to every test (think pass/fail result, code coverage data, performance data, trace output, etc). This data persists for as long as the test continues to be 'discovered' by the test framework, even between sessions of VS. When the test framework reports no test with the same identifier, the test is considered to be gone and all state is destroyed.

When a test is created using unstable parameters, each call into the test framework to discover tests results in a whole new test being created, because the test's identifier has changed. The result is that every time NCrunch calls NUnit to discover tests (consistently after every build of the test project), all state held for tests with unstable parameters is discarded. So that's bad. It means that impact detection won't work, and the engine does a ton of extra work rerunning tests and flicking through transient results.

The problem goes deeper though. If the dumping of test state were the only real issue, handling for unstable tests could still 'work' in the sense that they would still be run by the engine and results would be reported. The deeper problems come from NCrunch's parallelisation, selective execution and distributed processing.

To perform parallel execution, NCrunch needs to use multiple test processes that execute tests in parallel. The mechanics of NUnit are such that tests must be discovered before they can be executed. This means that there must be a whole separate discovery step executed inside each process used for execution, so if we have two processes, then we need to discover the tests twice. If the tests have unstable parameters, each process will have an entirely different set of tests, making it impossible for the engine to split the full master list of tests between the processes for execution. This problem is also extended when using distributed processing, because the remote execution process is running on different hardware in a completely different environment.

There is also the issue of selective execution. NCrunch's default mode of operation is to always create a new test process when it is specifically told to run a test. This is to clear the slate and be as consistent as possible with other runners. Such a feature can't work with unstable parameters, because spawning a new test process requires rediscovering tests, which subsequently cannot be identified if their parameters have changed.

NUnit does have its own internal ID system that could be used to identify tests with unstable parameters between processes, but this ID system is based on test generation sequence (i.e. it's incremental). Such a system can't be used by any test runner that needs to manage test state across multiple versions of a test assembly, because if the user creates a new test, all the IDs will fall out of sequence and the data will become dangerously misleading. The NUnit devs have expressed interest in moving away from this sequence-based system and towards IDs generated from the test attributes themselves, which would be similar to how xUnit does it (and likely wouldn't work with unstable parameters).

I still believe that the best way to resolve these issues is to perform consistent seeding of the unstable parameters. There is always a concept that tests should be repeatable and consistent, which is difficult to achieve if the tests are fully randomising all their inputs. In actual practice, a test that generates randomly seeded data for its execution is a whole new test every time it is generated, as the code can behave differently based on the data it is fed.
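
To make the suggestion concrete, consistent seeding could mean deriving the random seed from the test's identity, so every discovery and execution produces the same values (a purely illustrative sketch; neither product ships this):

using System;

static class StableSeed
{
    // Derive a deterministic seed from the test's full name. A hand-rolled
    // hash is used because string.GetHashCode may differ across processes.
    public static int ForTest(string testFullName)
    {
        unchecked
        {
            int hash = 17;
            foreach (char c in testFullName)
                hash = hash * 31 + c;
            return hash;
        }
    }
}

// Usage: new Random(StableSeed.ForTest("Playground.UnitTest.VolatileTest"))
// produces the same parameter values in every discovery/execution process.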

I wasn't able to find this setting in my 3.10.0.20 installation. Do you mean the Only consider tests 'Out of date' if they are 'Impacted' setting that should be set to true? Sorry if I missed it somewhere - I'm a bit new to this product.

Go to the NCrunch menu, choose 'Set engine mode', and you should see the option there. If it isn't there, it's possible that you're using a solution that was executed by a much older version of NCrunch and it's only showing legacy engine modes. Just creating a new solution somewhere should solve this.

@remcomulder Wow! Thanks for such a detailed explanation! Now I see that it's indeed easier to say that AutoFixture & NUnit is not currently supported, as there are some HUGE issues under the hood 😅

I still believe that the best way to resolve these issues is to perform consistent seeding of the unstable parameters.

Actually, in this PR we have a slightly different idea. We want to alter the test name so that it's stable. For instance, given a test like this:

    [Test, StableNameAutoData]
    public void Sample(int a, Data d1, DataWithToString d2, ISomeData d3)
    {
        Assert.IsTrue(true);
    }

the test name will be:

NUnit3RunnerTest709.TestNameTester.Sample(auto<System.Int32>,auto<NUnit3RunnerTest709.Data>,auto<NUnit3RunnerTest709.DataWithToString>,auto<Castle.Proxies.ISomeDataProxy>)

The name will always be the same across discoveries/executions, while the actual argument values will differ each time.
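
For illustration, such a stable name can be composed from the parameter types instead of the generated values (a sketch only; the helper name is hypothetical, not the PR's actual code):

using System.Linq;
using System.Reflection;

static class StableTestName
{
    // Build "Namespace.Class.Method(auto<T1>,auto<T2>,...)" from the runtime
    // types of the generated arguments (nulls would need extra handling).
    public static string For(MethodInfo testMethod, object[] args)
    {
        var typeList = string.Join(",",
            args.Select(a => "auto<" + a.GetType().FullName + ">"));
        return testMethod.DeclaringType.FullName + "." + testMethod.Name
            + "(" + typeList + ")";
    }
}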

Given your deep knowledge of NUnit, how would you evaluate this approach? Will it work for you? I expect that you use the theory name rather than binding to particular argument values. Therefore, if the test name is stable, you should not have any trouble, as the test is now identifiable. Could you please confirm before we start to implement this? ☺️

There is always a concept that tests should be repeatable and consistent, which is difficult to achieve if the tests are fully randomising all their inputs.

Probably one day we'll support a stable seed so that tests can be replayed with the same arguments; however, we are currently pretty far from that, so it's not realistic 😕 Rather, we expect that users' assertions will be precise enough to understand later why a test failed, even if the input was randomized within some ranges.

Given your deep knowledge of NUnit, how would you evaluate this approach? Will it work for you? I expect that you use the theory name rather than binding to particular argument values. Therefore, if the test name is stable, you should not have any trouble, as the test is now identifiable. Could you please confirm before we start to implement this? ☺️

Unfortunately my knowledge of NUnit is far from deep :( NCrunch basically just takes the name NUnit returns to it and uses this to generate the identifier. So in theory, as long as your solution does change/stabilise the physical name of the test as returned by the NUnit API, then we should be OK here, for NCrunch at least.

Something to watch out for is the potential for a user to create multiple tests with the same signature. If the parameters in the name are being stripped down to raw types, this becomes much more likely/possible. As long as you're aware of these scenarios, you could probably code in an error or something to discourage it.
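
A guard along the lines of the following sketch could implement that error (hypothetical helper; not part of either product):

using System;
using System.Collections.Generic;
using System.Linq;

static class DuplicateNameGuard
{
    // Fail fast when two tests collapse to the same stable name, since the
    // runner could no longer tell them apart.
    public static void AssertUnique(IEnumerable<string> stableNames)
    {
        var duplicate = stableNames.GroupBy(n => n)
            .FirstOrDefault(g => g.Count() > 1);
        if (duplicate != null)
            throw new InvalidOperationException(
                "Multiple tests share the stable name: " + duplicate.Key);
    }
}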

@remcomulder Just to update you, so we can finally close this issue.

As discussed above, the xUnit framework is natively supported. For NUnit, however, the situation wasn't as clear, since NCrunch doesn't support tests with volatile names.

We recently merged a PR and released a new version of AutoFixture (v3.51.0), which adds support for stable names for NUnit. For now, manual actions are required on the user's side (see here); however, in v4 I'm going to make it work out of the box.

I've just tested it and found that if I use the approach above, NCrunch is able to run only the modified tests, and it seems to work correctly now. I'm not sure whether any actions are required on your side (e.g. documenting it somewhere). It would also be great if you could test the new feature and let us know whether it works fine now, so we can have a rest 😄

@zvirja Thanks for looping me in on this. I've given this work a quick test, and it looks solid to me. By removing the random elements from the test names, the runner can consistently identify tests across sessions and all seems to be well :)

I do have an idea though that I think might save us some time in user support. One problem we all find with software is that many people don't read the docs before they pick up a tool. It's useful knowing that we are now able to advise them on how to solve the unique name issues under NUnit, but the best approach is always making the problem solve itself.

I note that by default, AutoFixture will use the VolatileNameTestMethodBuilder. I accept this is for reasons of backwards compatibility and I agree that this is sensible. But what if we were to allow this default to be overridden using an environment variable? If we could specify an environment variable (say, 'AutoFixture.FixedTestNames'=='1') to force AutoFixture to use the FixedNameTestMethodBuilder, it would be possible for a runner to specify the builder upfront, without the user needing to do anything. This would also be a bonus for people that use multiple runners, as they could implicitly use different builders for each runner (without any effort required) and get the best of both worlds. A sketch follows below.
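
As a sketch of the proposal (the variable name is the one suggested above; how the chosen builder would be wired into AutoDataAttribute is an open implementation detail):

using System;

static class TestNameStrategy
{
    // A runner such as NCrunch would set AutoFixture.FixedTestNames=1 in the
    // test process; AutoFixture would then pick FixedNameTestMethodBuilder.
    public static bool FixedNamesRequested()
    {
        return Environment.GetEnvironmentVariable(
            "AutoFixture.FixedTestNames") == "1";
    }
}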

What do you think?

I've given this work a quick test, and it looks solid to me.

That's awesome! 🎉 That means we can close this issue, as everything now works fine. Thank you for the testing and for your participation here 🥇

What do you think?

I think I ended up with a much better and simpler solution - we'll make the FixedNameTestMethodBuilder the default strategy in v4 (our next major release), which will be released in the upcoming month or two. The appropriate PR has already been approved, so the code will be there. Therefore, it will work fine out of the box, and if somebody needs volatile test names, they can opt in manually, clearly understanding the consequences.

Later we could do something like you suggested - add a switching strategy that inspects environment variables/AppConfig - however, I'd prefer to do that only if it's indeed required.

I think I ended up with a much better and simpler solution - we'll make the FixedNameTestMethodBuilder the default strategy in v4 (our next major release), which will be released in the upcoming month or two.

That will work great for me :) I'm happy! Thanks for all your effort.

Cool 😉 Thank you again for your replies and collaboration 👍

I'm closing this one, as no further actions are required from either side.

@remcomulder @zvirja Just wanted to say that this discussion (which helped me understand a similar issue), and the dedication you've demonstrated here, is nothing but awesome! Thank you both for your great products.
