Nunit: Run test methods within a fixture in parallel

Created on 5 Aug 2014  ·  76 Comments  ·  Source: nunit/nunit

Currently, we only have implemented parallel execution at the fixture level. It should be possible to run individual methods of a fixture in parallel as well as individual test cases for a parameterized method.

We should be able to specify the exact level of parallelism on the fixture or method.
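As a sketch of what a specified level of parallelism could mean at the method level, consider a bounded worker pool per fixture. This is a minimal Python illustration under assumed names (`run_fixture_parallel` and friends are hypothetical, not NUnit API):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def run_fixture_parallel(test_methods, level_of_parallelism):
    """Sketch: run a fixture's test methods concurrently, with at
    most `level_of_parallelism` of them executing at any one time.
    `test_methods` maps a test name to a zero-argument callable."""
    results = {}
    lock = threading.Lock()

    def run_one(name, method):
        outcome = "Passed"
        try:
            method()
        except AssertionError:
            outcome = "Failed"
        with lock:  # results dict is shared across worker threads
            results[name] = outcome

    # The pool size is the requested degree of parallelism.
    with ThreadPoolExecutor(max_workers=level_of_parallelism) as pool:
        for name, method in test_methods.items():
            pool.submit(run_one, name, method)
    return results
```

The same bounded-pool idea applies whether the dispatched unit is a fixture, a method, or a single case of a parameterized method.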

This issue was previously part of #66

Labels: done, hardfix, feature, high


All 76 comments

Update: Both this comment and the following one were in reply to an individual who apparently removed his own comments. I'm leaving my answers.

Well it's in the spec and scheduled for post-3.0 implementation. If it were so easy, we probably would have done it. Not sure what connection you see with mbunit. The fact that they had it doesn't help us.

This is getting a bit tiresome. A few points, already stated but apparently missed...

  1. There is a plan to implement what you are asking for.
  2. We decided as a team to schedule it at a certain point.
  3. When it is scheduled has primarily to do with our assessment of the value of the feature as compared with other features.
  4. There are several people on the NUnit team who could implement the feature.
  5. They would be able to implement it, depending on priorities, out of their heads and would not need to copy it from anywhere.

I find your talk of "reverse engineering" very disturbing. Open source is only possible in a context where copyright is respected. So if you are suggesting that we might ignore the mbunit licensing terms, you are way off base.

While I did contribute many patches to MbUnit and used it for years, I was
never a key contributor. I wouldn't want my status to be misrepresented :)

As for reverse engineering, there isn't really any use here. NUnit works
entirely different than MbUnit did. We will get to this issue in due time,
but are approaching it cautiously because other similar issues might change
the way we need to implement this or even conflict.

A much more tactful comment than mine!

Our current priorities in the world of parallel execution are:

  1. Parallel process execution
  2. Exclusion groups
  3. Parallel method (in one fixture) execution.

This seems to represent the order of greatest usefulness to users.

I'll also mention that marking something as post-3.0 does not necessarily mean the feature comes at a later time than it would if we made it part of 3.0. Rather, it may mean that 3.0 comes at an earlier point in time.

When we do this, we are going to have to decide how to handle setup and teardown commands. The easiest option would likely be to construct the test class for each test so that there is no contention for data.

I would favor making that an option at some point anyway, but I'm not sure whether it's a run option or a fixture-by-fixture (attributed) option or both.

Hi!
The addition of this feature to the next version of NUnit would be great, since it is the only thing that prevents me from switching to NUnit. Is this feature still planned for 3.4?

@julianhaslinger NUnit 3.4 will be out at the end of the month, so no, this feature will not be included.

FYI, this issue is in our Backlog milestone (or pseudo-milestone) rather than 3.4 because we are following a practice of only adding a small number of key defining issues to each numbered milestone in advance.

The next milestone, 3.6, is scheduled to drop in 3 more months, which probably sounds discouraging to you. :-( However, if you see this issue being merged to master, you will be able to get an earlier drop from our MyGet feed.

@julianhaslinger this won't be in 3.4 which is probably out in just over a week, but it is one I would like to see done soon, so your vote helps :+1:

As an aside, which test framework are you using that supports parallel method execution? I thought XUnit only allowed running tests in parallel down to the Class level too. Am I mistaken? I don't use XUnit much :smile:

@CharliePoole Thanks for the answer! Ok, this sounds a bit discouraging, though. However, I will wait for the next releases (or earlier drops). Please (still) consider this feature in the next version (3.6).

@rprouse I'm using MbUnit/Gallio right now. The project pretty much died and now I'm going to switch to NUnit.

@julianhaslinger Is it a question of running time for you? Do you have some numbers on the distribution of test cases within fixtures? I ask because my assumption has been that the feature gives little to users beyond what they get from parallel fixtures. If that's wrong, it could change its relative priority.

@CharliePoole Exactly, our actual problem is the running time of the test cases. We would like to use NUnit for our automated / regression tests (including Selenium / WebDriver) and since some of our test classes have 200+ test cases, it's currently a real pain to execute the whole test suite with NUnit.

Comparisons (within a smaller test set of our regression tests) have shown that a possible NUnit solution takes about 1.5 hours to execute, whereas the current MbUnit solution executes the same test set in 0.5 hours.

However, @CharliePoole, I'm aware of the fact that we might be able to reduce execution time by splitting bigger test classes into smaller ones, but this would somehow undermine the initial idea of our test class hierarchy.

@julianhaslinger - Just to confirm, are you sure your running time is CPU bound, and not memory bound?

We experienced the same issue when we first tried switching to NUnit 3 - run time became significantly longer. This was due to NUnit maxing out the available memory however - there's an issue with the current version which holds on to all test objects until the test run is complete. We're currently running a patched version internally - I wanted to get a PR in for 3.4, but not sure I'll have time at this point. :-(

Apologies if this isn't it, but thought it was worth a mention, as Selenium can use its fair share of memory. :-)

@ChrisMaddock Thanks for the input! :+1: We will have a look at it soon and I will come back to you then.

@ChrisMaddock I would love to see your memory fix in 3.4. If you want to put up a PR, we can help you clean it up if it isn't ready for production yet :+1:

@rprouse - This initial commit https://github.com/nunit/nunit/pull/1367/commits/5f98ae51025f7af8244abd4367d1f47260874dfc in PR #1367 frees things up properly for us, in a patched v3.0.1.

You'll see in the PR that it was moved to OneTimeTearDown before merge - I don't yet know whether that move caused it not to work in v3.2.1, or whether another change reintroduced the retention - I may well not have retested after moving.

Will try and re-test it and throw it up tomorrow, would be great for us to go back to core releases as well. :-)

@ChrisMaddock I had a look at the memory consumption during a regression test run and found out that the execution of test runs is not memory-bound. I guess we really have to wait for a version of NUnit that can handle the parallel execution of individual methods inside each class.

Just to add to what @julianhaslinger said, I've seen exactly the same issue using NUnit with Selenium tests. For now, I had to write a custom wrapper that runs individual instances of nunit-console.exe against single tests scraped from a passed-in assembly, so that I could run tests in parallel. This is not ideal, though: it generates many .xml output files (one for each test run) rather than a single .xml output, and it also means I cannot make use of _OneTimeSetUp_ and teardown methods, relying instead on external semaphores and environment variables to control execution flow where I only want to run things once.

I've been following the promise of parallel execution in NUnit for a long time now (years) and was very disappointed at how the feature was rolled out in 3.0: only partially supported, and requiring some real digging in the issues and release notes to discover that not everything was actually supported. I even went so far as to investigate what would be involved in moving this forward myself, but unfortunately it appears that NUnit's design works against implementing parallel execution at the test method level. I've also no idea how you'd actually like to handle potentially non-threadsafe code: should the person using NUnit in parallel ensure all test methods properly reference any externals in a threadsafe manner, or should NUnit itself run each method in its own domain while somehow still running the one-time setup and teardown methods only once? Since I already have a workaround in place, I could not justify the time to implement this feature myself (which is probably for the best, because I've been burned before when working with teams on GitHub to help them implement parallel execution). Anyway... sorry for rambling...

I understand that the priority for unit testing of C# code probably doesn't rely on method-level parallelism as much, but please do know that for those of us using NUnit for Selenium-based tests it would be a HUGE improvement. Thanks in advance! :smile:

Thanks for your comments. I think I'll ramble on a bit myself here...

It helps to understand what people need and why they need it. Not being a web developer myself, I tried to listen to Selenium users and focus on what they wanted. What that seemed to be was parallel fixtures with ordering among the test cases. I fully grasp that different people will want different things, but when both groups present themselves as expressing "what Selenium users need" it's a bit confusing. Can you outline for me what sorts of web tests might want method-level parallelism as opposed to those folks who want to run methods in a pre-defined order? I think I could guess, but I'd rather have it from a user.

Regarding your workaround... did you consider using separate test assemblies? NUnit would be glad to run each one in parallel in a separate process. Of course that doesn't eliminate the need to synchronize access to things like files.

Regarding thread-safety - we never planned for our parallel implementation to take responsibility for thread-safety. The only thing I was expecting to guarantee is that NUnit won't create any thread-safety problems if your methods are already thread-safe. That in itself turns out to be non-trivial.

Thanks for the response. I completely understand your comments about
ordering of test execution in a fixture and splitting tests into separate
assemblies.

I am already a NUnit user and have been for years (and JUnit before that)
so the idea of relying on ordering of tests in a fixture wasn't even a
consideration in my test design. Each test tries to be a fully encapsulated
action or set of actions within the site with no expectation on order of
execution. The design of the site I'm testing also means that many tests
can take up to 20 minutes to execute (much of this time is spent in waiting
for things to happen on the site) and the type of testing is covering
multiple similar scenarios so base test classes are utilised to reduce code
duplication and increase maintainability by way of using common base class
methods. The tests are organised into separate projects based on the dev
team ownership of that area (so we do use separate assemblies, but there
are many tests per assembly and my test wrapper will handle execution of
all tests in all passed in assemblies in parallel as it was created before
the first official release of NUnit 3 and I wanted to ensure all tests had
the potential for being executed simultaneously given enough threads
available).

Essentially, you can imagine that one assembly might have 10 tests and
another 100 tests, but the assembly with 10 has tests taking 10 minutes
each whereas the assembly with 100 has tests taking 1 minute each. If I
have a machine capable of running 110 parallel threads (which is actually
part of our infrastructure design) then I would expect the tests to finish
in 10 minutes, not in 100 minutes.

Hopefully this helps explain... sorry if things still aren't totally clear,
but the overall point is that method-level parallelism is something I've
seen missing in other test libraries (rspec in Ruby, for example) and when
doing analysis of the performance improvements to be had by adding it I
found it can be a major positive change. See the following as an example:
https://github.com/bicarbon8/rspec-parallel/wiki/Examples

Thanks for the info... it's helpful to see a variety of perspectives.

My 5 cents.

In our project we have ~25 test assemblies, and parallel execution of tests per assembly and per fixture already greatly reduces execution time. Parallel execution of tests within a fixture will improve it even more. Right now we even split the most time-consuming fixtures into several fixtures to speed up execution, but it would be much better if NUnit ran fixture tests in parallel.

What I really want to see in NUnit is test parallelization at the test case level, because splitting such tests into fixtures is not always convenient and leads to code duplication. We have several tests that can have up to several thousand test cases (each case executes quickly, but in total it takes minutes) or just dozens of cases (but each case is slow), and for now such tests are the main show-stoppers for us.
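For illustration, running a parameterized method's cases as independent work items rather than as a sequential loop might look like this sketch (Python; the names are hypothetical and this is not NUnit code):

```python
from concurrent.futures import ThreadPoolExecutor

def run_cases_parallel(test_method, cases, max_workers=8):
    """Sketch: execute each case of a parameterized test as its own
    work item on a pool, instead of looping over all cases on one
    thread. `cases` is a sequence of argument tuples."""
    def run_case(args):
        try:
            test_method(*args)
            return (args, "Passed")
        except AssertionError:
            return (args, "Failed")

    # map() preserves case order in the returned results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_case, cases))
```

With thousands of fast cases this amortizes scheduling overhead; with dozens of slow cases it directly cuts wall-clock time, which is the show-stopper scenario described above.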

Thanks for the input... it's an area we are going to address in the next release along with the competing but complementary issue of making sure that some tests don't run in parallel with one another!

My earlier question was more about how people are writing tests, particularly when driving a web application. Some folks insist they want each method to be a sequenced step and the fixture to represent a series of such steps. This is of course non-parallel by definition. Other people appear to write very similar tests without sequencing the tests. My guess is that the latter group write bigger test methods, with a series of actions and asserts in each one.

@CharliePoole Great news! Thank you so much!

In my opinion, test cases should be isolated and should not depend on other test cases or on execution order. If people still want to run some tests single-threaded, they could still use [Parallelizable(ParallelScope.None)] and/or OrderAttribute.

Implementing this issue would make it possible to run test suites faster against a local WebDriver grid or a SaaS provider such as BrowserStack or SauceLabs. Currently I try to limit the number of test methods in a test class to the number of available WebDriver nodes or browser sessions, so the grid is fully used. Otherwise, if a test class has 8 test methods and my grid has 8 browser sessions available, only one session will be used, because all the test methods of the class run one at a time.

@pablomxnl Absolutely agree. As a coach or project lead, I have always pushed that idea very hard. However, I'm not wearing either of those hats here. I'm a software provider with users who ask for things. If they ask for truly egregious stuff, I just say no. But if they ask for things that are reasonable in some context, but not others - even not most - I give it consideration.

In the case of web sites, we are almost always not talking about unit tests. Higher level functional tests often require sequencing. Whether it is done via a set of steps in the test or a series of methods is an implementation detail for us and a matter of convenience for the user. Personally, I think we probably need something called a [Step] that sits inside a [StepFixture] class, or some similar names, so we can get all this ordering/sequencing stuff moved away from unit testing. Maybe I'll have time to do that before the web dies. :-)

Anyway, it's always interesting to learn how people actually use your software.

@CharliePoole, please don't get me wrong by asking this (just want to be sure): Will this feature be added to the next version (3.5, due by September 29, 2016)?

While I could pretend to predict the future, that would only satisfy you until September. :-)

Let me answer by explaining how we "schedule" features.

We are currently trying (since start of year) an approach that has releases coming out once per quarter. As we get better at releasing, we may increase the pace. Eventually, we may be able to do continuous deployment.

In a volunteer Open Source project, there is no fixed amount of effort available per month. We can use the "yesterday's weather" approach, but it turns out that the amount of time people have to spend on NUnit, as well as the number of people volunteering, is quite variable.

As a compromise, we select a small number of issues that are key to a release and add them to the milestone in advance. My preference is to add no more than about 25% of what we may hope to get done. The majority of issues in a release are only added to that milestone after they are done or at best when somebody commits to work on them. You generally will not find open issues in our 3.5 milestone without somebody assigned to them, although I do it occasionally if it looks like something is blocking other work.

So, the positive side of what we do is that we can virtually guarantee that the release will come out on time. The negative side is that we can't tell you what will be in it. :-(

For this particular issue... I've given it high priority in order to tell our developers that it's important. Somebody has to be free to work on it and have enough background for a project of this scope and difficulty. If somebody with the right background picks this up in the next few weeks, I would guess that it can be done by the release. As a guideline, I think it would take me about a month, while working on some other things as well, to work my way through this.

Sorry there's no more definitive answer but such an answer would necessarily be a lie!

@CharliePoole is there any update considering this feature?

@tomersss we haven't started work on this yet and 3.5 will hopefully be out in a few weeks, so it is unlikely to make the next release. We would like to see this feature added soon, but we have been fairly busy this week reorganizing our repositories and code base. It is marked high priority though, so it is on our radar.

@CharliePoole is there any update to this issue?

@KPKA we haven't started work on this yet, so there is no update. For users with ZenHub installed, you will see this issue move from Backlog to ToDo then In Progress as we start working on it.

For non-users of Zenhub, you will be able to tell when somebody picks it up by watching who is assigned to the issue.

@CharliePoole @rprouse

Hi!

Is there any news regarding this issue? Will it be in the next release, or is it planned for one of the upcoming milestones?

At the moment this is the only issue that is preventing us from switching to NUnit. It would be great if someone could commit to working on this in the near future, since I guess there are many people affected.

Hope to hear from you soon

@GitSIPA what test framework are you using that allows you to run test methods in parallel?

To answer your question, nobody has started working on this issue, so there is no ETA, but we will take your vote into account as we decide what to work on next.

+1, Agree with @GitSIPA. This is really important functionality

@GitSIPA +1
@rprouse We are using MbUnit, but that framework is no longer supported, and we have some issues with it.

@rprouse We are also using MbUnit at the moment, but since it is not supported for the newer Visual Studio versions, we considered switching to NUnit.

On issue #1921 @jnm2 asked "What stands between us and having parameterized test cases run in parallel? Can I contribute?"

Second question first: For sure! We'd love your help.

And to the first question:

  1. We execute "WorkItems", which currently map one-to-one to tests. I think we need to treat OneTimeSetUp and OneTimeTearDown for a fixture as separate WorkItems as a preliminary step. Issue #1096 deals with this.

  2. We need to have some way to make a WorkItem depend on other WorkItems, so that it is only scheduled to execute when all the items it depends on are complete. We currently do that in a very simplistic way using a CountDown, but we need something more general. The first use of this, of course, would be to have OneTimeTearDown depend on completion of all the tests. It would have other uses later, when we implement dependency among tests. I picture this as a queue of pending work items, but there may be other ways to implement it.

  3. With those two things out of the way, we could start working on the main issue itself: scheduling tests to run rather than running them on the same thread that did the OneTimeSetUp for the fixture.

If you'd like to work on this, you'll have to get very familiar with some of the internals of how NUnit actually dispatches tests for execution. I'll be glad to help you do that.
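To make the countdown idea in step 2 concrete, here is a hedged sketch of dependency-gated scheduling (Python; `WorkItemScheduler` and its fields are illustrative names, not NUnit's internal WorkItem classes). Each item carries a count of unfinished dependencies and becomes ready only when that count reaches zero, so OneTimeTearDown cannot run until every test it depends on completes:

```python
from collections import defaultdict

class WorkItemScheduler:
    """Sketch: each work item counts down its unfinished
    dependencies and is queued only when the count hits zero."""
    def __init__(self):
        self.actions = {}                    # name -> callable
        self.remaining = {}                  # name -> unfinished dependency count
        self.dependents = defaultdict(list)  # name -> items waiting on it

    def add(self, name, action, depends_on=()):
        self.actions[name] = action
        self.remaining[name] = len(depends_on)
        for dep in depends_on:
            self.dependents[dep].append(name)

    def run(self):
        """Dispatch ready items; a real dispatcher would hand them
        to worker threads instead of running them inline."""
        order = []
        ready = [n for n, c in self.remaining.items() if c == 0]
        while ready:
            name = ready.pop()
            self.actions[name]()
            order.append(name)
            # Completing this item counts down everything waiting on it.
            for waiter in self.dependents[name]:
                self.remaining[waiter] -= 1
                if self.remaining[waiter] == 0:
                    ready.append(waiter)
        return order
```

The `ready` list stands in for the queue of pending work items mentioned above; parallelism comes from draining it with multiple workers rather than one loop.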

What planning has been done so far?
My needs are to run enough TestCaseSource cases in parallel to saturate the CPU. I have no setup, teardown or ordering requirements. Other people will need those to be taken into account though.

I would expect TestCaseSource execution to be parallelized, so two methods in the same fixture that used different TestCaseSources would execute the enumerations in parallel. It also seems it would be nice, if two methods used the same TestCaseSource, for the source to be enumerated only once - thoughts on this?

Edit: commenting RACE CONDITION. We're off to a good start already! 😆

I'll repeat a story I've been having to tell a lot lately. :smile:

NUnit first loads tests (aka discovers, explores) and then executes them one or more times, depending on the type of runner.

Your TestCaseSource is used in loading tests, not in running them, and we don't even consider it as part of your test - although you may have it in the same class as your test.

IOW, your test cases don't "use" your testcase source, rather NUnit uses the source to create the test cases. Of course, they can't do anything until created. Make sense?

As it happens, the creation of tests from the source is done sequentially. If you have some code being executed in the source that makes that a problem, my first guess would be that you are doing too much in the test case source. For example, you don't want to create instances of classes in your test case source, rather you want to store parameters that will be used to create them. Unfortunately, this could be a whole book chapter. :smile:
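The load/run split described above can be sketched like this (Python; all names are illustrative, not NUnit code). The source is consumed once, sequentially, at load time and yields plain parameters; anything expensive is constructed inside the test when it runs, so the created cases can later be run in parallel:

```python
def widget_cases():
    """Analogue of a TestCaseSource: yield parameters, not built objects."""
    yield ("small", 1)
    yield ("large", 100)

def load_tests(source, test_method):
    """Load phase (sequential): materialize one test case per
    parameter tuple yielded by the source."""
    return [(args, test_method) for args in source()]

def run_test(args, test_method):
    """Run phase: the expensive object is created here, per test,
    not inside the source."""
    name, size = args
    widget = {"name": name, "size": size}  # stand-in for a heavyweight object
    test_method(widget)
```

Keeping the source cheap (parameters only) means the sequential load phase stays fast even when the run phase is where all the parallelism happens.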

So, what this issue is about is __running__ test cases in parallel. That means all test cases under a fixture, unless you marked some as non-parallelizable. Simple, non-parameterized tests as well as individual cases of a parameterized test would be treated equally in this.

It would be an interesting experiment to allow parallelization of tests in fixtures where there is no OneTimeTearDown (I think OneTimeSetUp is OK). It might work easily if you could correctly make that determination. The determination is harder than you would think, however, because OneTimeTearDown can include both inherited methods and global ActionAttributes.

@jnm2 Did you already have the chance to tackle this specific issue?

@julianhaslinger No. It's on my list after my two more immediate issues, 1885 and 1933. If you want to tackle it, all the better!

@julianhaslinger Since we don't know one another, please excuse me for saying that this is a tough one to take on. Make sure you dig into how things are being done now and why - the latter is not always obvious I'm afraid. See my comment at https://github.com/nunit/nunit/issues/164#issuecomment-265267804 for a breakdown into separate PRs I'd like to see in order to fix this. If you come up with a simpler approach, post about it before coding it, so we can weigh future considerations against the obvious YAGNI involved in predicting the future.

@CharliePoole @jnm2 Hi guys!

It's not that I wanted to start implementing this missing feature, but merely to know about its progress. As I can tell from your answers (above), there has been no progress (?) regarding this particular feature request.

@CharliePoole One of the potential issues I came across while researching the code was how Fixtures are handled right now. Currently all WorkItems have access to the same Fixture instance, which means that if the WorkItems were to be executed in parallel, they would be accessing a single instance of the test class. This is problematic because of the race conditions it creates when tests rely on Setup or Teardown to set instance level fields.

One solution to this would be to have each WorkItem operate on a new instance of the test class, but this will likely complicate how fixture setups behave, since there would no longer be a single instance of the fixture.
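The race described above can be forced deterministically with two events (a Python sketch with hypothetical names; real races are of course timing-dependent): test A's SetUp writes an instance field on the shared fixture, test B's SetUp overwrites it before A reads it back.

```python
import threading

class SharedFixture:
    """One instance shared by parallel tests: SetUp writes an
    instance field, so concurrent tests can clobber each other."""
    def setup(self, value):
        self.value = value

def demonstrate_race():
    fixture = SharedFixture()            # the single shared instance
    a_setup_done = threading.Event()
    b_setup_done = threading.Event()
    observed = {}

    def test_a():
        fixture.setup("A")
        a_setup_done.set()
        b_setup_done.wait()              # B's SetUp runs in between
        observed["a_sees"] = fixture.value

    def test_b():
        a_setup_done.wait()
        fixture.setup("B")               # overwrites A's state
        b_setup_done.set()

    ta = threading.Thread(target=test_a)
    tb = threading.Thread(target=test_b)
    ta.start(); tb.start()
    ta.join(); tb.join()
    return observed["a_sees"]            # "B": test A sees B's data
```

With one fixture instance per WorkItem, `fixture.setup("B")` would act on a different object and test A would see its own value.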

@chris-smith-zocdoc Yes, that's the biggest difference between NUnit and its predecessor JUnit. Back in the day, it was widely discussed pro and con, but it doesn't come up much now. For that reason, NUnit tests were supposed to be stateless, with all initialization performed in the SetUp method.

Once TestFixtureSetUp (now called OneTimeSetUp) was introduced, it became possible to do more expensive initialization once and have the tests all operate on the same initialized values. This is most useful when creating instances of the classes being tested, which may themselves be stateful. In recent years, more and more users have been availing themselves of this capability.

The subject of adding an option to create a separate instance of the user fixture per test case comes up periodically. So far, it has not made it to the stage of being a planned feature. It is definitely related to the usability of a parallel method feature on the part of users but I think it's orthogonal to the implementation. IF we were creating a new fixture per instance, we would simply run "one" time setup more than once. Of course, that would hit hard any users who are making use of statics. 😢

We postponed parallel test methods for two reasons: (1) it's hard to implement and (2) it seemed that parallel fixtures, while easier for us, is sufficient for most users. Very few users seem to share state across multiple fixtures while it appears (based on online discussions) many users share state across test methods. So far, lots of users are taking advantage of what parallelism there is and while some want method parallelism quite badly, they seem to be a relatively small group.

Nevertheless, I guess it's time to do something. I'm going to take this on for the first quarter, at least to the extent of running some experiments and possibly coming up with a sensible breakdown of tasks that others can follow up on.

> IF we were creating a new fixture per instance, we would simply run "one" time setup more than once. Of course, that would hit hard any users who are making use of statics.

I think it's reasonable to require that there is no static state if you opt to run in parallel, to the extent that it's your fault if something breaks for mixing static state and parallelism. Luckily all your code will be under test so it should be hard to miss 😆

> Nevertheless, I guess it's time to do something. I'm going to take this on for the first quarter, at least to the extent of running some experiments and possibly coming up with a sensible breakdown of tasks that others can follow up on.

Count me in for that.

Me too!

Running some timing tests, I discovered something interesting. There is an ambiguity in calling test cases parallelizable. This may also apply to fixtures and other suites, but it's less obvious.

Here's the thing: if a method is completely non-parallelizable, that means it cannot run in parallel with __any other method__. OTOH, if it's only non-parallelizable within its fixture, then it can't run in parallel with __any other method in the same fixture__.

I propose we handle this distinction as follows:

  1. If a test method has no Parallelizable attribute, it should run on the same thread as its containing fixture, provided no other attribute (e.g. Apartment) calls for it to be run on a different thread

  2. If it has a Parallelizable attribute with a scope of Self, or if a higher level suite specifies a scope of Children, then the test method is run in parallel with the other methods in the fixture.

Thoughts @nunit/core-team ?

Left out above:

  3. If it has a Parallelizable attribute with a scope of None, then it runs on the non-parallel queue.
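
To make the proposal concrete, here's a sketch of how those rules would read in user code. The attribute and scope names are the existing NUnit 3 ones; the per-method behavior is what's being proposed here, not what currently ships:

```c#
[TestFixture]
[Parallelizable(ParallelScope.Children)] // scope Children: test methods run in parallel
public class ParallelMethodsFixture
{
    [Test]
    public void RunsInParallelWithSiblings() { /* ... */ }

    [Test, Parallelizable(ParallelScope.None)] // scope None: forced onto the non-parallel queue
    public void NeverRunsInParallel() { /* ... */ }
}

[TestFixture]
public class DefaultFixture
{
    [Test] // no attribute: runs on the thread of its containing fixture
    public void RunsOnFixtureThread() { /* ... */ }
}
```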

@CharliePoole - your three points make sense, although I'm not clear on what the alternative would be that you're suggesting we don't do. Could you clarify?

We currently treat not having an attribute the same as having one with ParallelScope.None. If I enable dispatching them, that gets them queued on the non-parallel queue and the test run slows significantly.

PR #2011 takes some initial steps toward implementing this feature but contains a #define that forces test cases to run on the thread of their containing fixture. This comment documents what I think needs to be done in order to remove that limitation.

Currently, a fixture's OneTimeTearDown runs on the thread that was used to finish the last test case executed. If test cases run in parallel, we can't predict which test case that will be. It might be a thread with different characteristics from those required for the fixture. For example, if the last test to finish is running in the STA, that's what will be used to run the OneTimeTearDown, even if the OneTimeSetUp ran in an MTA. In many cases, that might not cause any problem but in some it could.

This might actually be a problem with higher-level fixtures as well, but the introduction of parallel test cases means that there will be many more opportunities for a mismatch that causes an error.

So, in order to ensure that fixture teardown takes place on the right sort of thread, we can no longer run it on the thread of the last test to execute. Instead, the dispatcher will need to maintain information about all pending teardowns and be notified to run those teardowns on the proper thread at the proper time. Figuring out how to do that is the "design" phase of this feature.

For anyone who's been following this thread since the beginning (back in 2014) and doesn't want to, or is unable to, implement their own workaround while awaiting this feature: I've just stumbled across an implementation using NUnit 2.6.3, available on CodePlex, that seems pretty straightforward to use (I've verified it works for running our functional tests in multiple parallel threads).

http://cpntr.codeplex.com/

Apologies in advance @CharliePoole if this message is a bit orthogonal to the recent discussions on this thread, but if anyone else has been waiting the past three years for this feature based on the promises set forth for NUnit 3 (way back in the early days), I think it might offer a solution until you manage to work out the design issues (it sounds like you're closing in on a solution).

@CharliePoole

So, in order to ensure that fixture teardown takes place on the right sort of thread, we can no longer run it on the thread of the last test to execute. Instead, the dispatcher will need to maintain information about all pending teardowns and be notified to run those teardowns on the proper thread at the proper time. Figuring out how to do that is the "design" phase of this feature.

Do we need the dispatcher to know of the pending teardown items upfront, or can we dispatch them as they become available? More concretely, where CompositeWorkItem currently calls PerformOneTimeTearDown can we use the dispatcher to dispatch a new unit of work onto the correct work shift?

@chris-smith-zocdoc Yes, that's exactly what I'm doing. I created a new type of work item, OneTimeTearDownWorkItem, which is nested in CompositeWorkItem and is dispatched when the last child test is run. Later on, we might look at some efficiencies when the OneTimeSetUp and all the tests have been run on the same thread.
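
A rough sketch of the shape being described; apart from CompositeWorkItem, the member and type names here are illustrative only, not NUnit's actual internals:

```c#
public partial class CompositeWorkItem
{
    // Hypothetical nested work item: dispatched by the countdown when the last
    // child test finishes, onto the queue whose thread characteristics
    // (STA/MTA, etc.) match what the fixture requires.
    private class OneTimeTearDownWorkItem : WorkItem
    {
        private readonly CompositeWorkItem _owner;

        public OneTimeTearDownWorkItem(CompositeWorkItem owner)
            : base(owner.Test)
        {
            _owner = owner;
        }

        protected override void PerformWork()
        {
            _owner.PerformOneTimeTearDown();
        }
    }

    // Hypothetical hook: instead of running teardown inline on whichever
    // worker happened to finish last, hand it back to the dispatcher.
    private void OnLastChildCompleted()
    {
        Context.Dispatcher.Dispatch(new OneTimeTearDownWorkItem(this));
    }
}
```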

Haven't read everything too carefully, but one thing I'd like to request as part of this feature is that the parallelism be "smart", especially for I/O bound tests. For example, if you have 10 threads executing 100 parallelizable tests, it shouldn't be the case that the 10 threads sit and wait for the first 10 tests to complete before moving on to the next 10 tests. If the first 10 tests start awaiting very long-running I/O tasks, then the threads should be free to move on to other tests. When the I/O tasks complete, threads will resume the awaiting tests as threads free up.

Basically, I'm asking for smart throughput management for I/O bound tests that make extensive use of async/await. This is our number one bottleneck in tests, by far.

@chris-smith-zocdoc In fact, that's pretty much what I'm doing. I'm essentially using the existing countdown mechanism to trigger dispatch of a one time teardown task. The trick is to get it dispatched on the proper queue.

@gzak Bear in mind that the mechanism for parallel execution already exists. It depends on workers independently pulling tasks rather than having a controller that pushes tasks to workers. So if one worker is busy with a task for a while, other workers will continue to execute other tasks independently. The trick is to set the number of workers based on the nature of the tests being run. NUnit does fairly well by default with normal, compute-bound unit tests but other sorts of tests may require the user setting an appropriate level of parallelism.
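
The pull model described above can be illustrated in a few lines. This is a generic sketch, not NUnit's dispatcher: each worker pulls its next task as soon as it finishes the previous one, so a slow task only ties up one worker while the others keep draining the queue.

```c#
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class WorkerPullDemo
{
    static void Main()
    {
        // A shared queue of "tests"; nobody pushes work at a specific worker.
        var queue = new BlockingCollection<Action>();
        for (int i = 0; i < 100; i++)
        {
            int n = i;
            queue.Add(() => Console.WriteLine($"test {n}"));
        }
        queue.CompleteAdding();

        // Workers independently pull until the queue is drained.
        var workers = new Task[4];
        for (int w = 0; w < workers.Length; w++)
            workers[w] = Task.Run(() =>
            {
                foreach (var work in queue.GetConsumingEnumerable())
                    work();
            });

        Task.WaitAll(workers);
    }
}
```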

Can someone explain to me how this works?
I have a test class. When I run the tests in this class NOT in parallel, all tests pass.
But when I run them with [Parallelizable(ParallelScope.Children)], they do run in parallel (multiple methods in the same class), but for some reason some tests fail.
I have instance fields in this class that are used across the tests, and it seems those fields are shared between threads.
Am I right?
Do you create only one instance of that class and call the methods concurrently on this single object?
@CharliePoole:

You figured it out! Yes, all tests in a fixture use the same instance. This is historical with NUnit, which has always worked that way. You must choose between running the test cases in parallel and having any state that is modified per test. There is no way around it currently.

That said, if you have a decent proportion of fixtures, simply running fixtures in parallel can give you good performance.
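
Since all test cases share one fixture instance, one way to stay parallel-safe is to keep mutable per-test state local to each test method rather than in instance fields. A minimal sketch (the connection string is a hypothetical placeholder):

```c#
[TestFixture]
[Parallelizable(ParallelScope.Children)]
public class ParallelSafeTests
{
    // Shared by every parallel test case: keep it read-only, never mutate it from a test.
    private readonly string _connectionString = "Server=...;";

    [Test]
    public void EachTestOwnsItsState()
    {
        // Local state: each parallel invocation gets its own copy, so no cross-test interference.
        var builder = new System.Text.StringBuilder();
        builder.Append("safe");
        Assert.AreEqual("safe", builder.ToString());
    }
}
```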

Is it possible to have the [Parallelizable(ParallelScope.Children)] in the AssemblyInfo.cs file?

Has anyone seen that working?


@agray Yes, in fact that's the only reason I have an AssemblyInfo.cs now.

Hi Joseph,

What parallelism line are you adding to your AssemblyInfo.cs file?

I'd love to know what works.

Cheers,

Andrew


@agray The one you asked about:

```c#
[Parallelizable(ParallelScope.Children)]
```

Don't forget the assembly target:

```c#
[assembly: Parallelizable(ParallelScope.Children)]
```

@LirazShay I use NUnit to drive Selenium tests, and was using fixture-level fields to hold things like references to user accounts and the Selenium WebDriver instance I was working with, so I was unable to run tests in parallel within a fixture. I worked around that by writing a "factory" (quotes because I'm not sure that's the right term) that implements IDisposable: for each test it encapsulates everything the test needs and cleanly tears it all down at the end, with no need for [TearDown] or [OneTimeTearDown], kind of like so:

```c#
public class TestFactory : IDisposable
{
    // Instantiate a new SafeHandle instance.
    private readonly System.Runtime.InteropServices.SafeHandle handle =
        new Microsoft.Win32.SafeHandles.SafeFileHandle(IntPtr.Zero, true);

    // Flag: has Dispose been called?
    private bool disposed = false;

    public TestFactory()
    {
        this.UserRepository = new List<UserAccount>();
        this.DU = new DataUtility();
    }

    // A list of users created for this test
    public List<UserAccount> UserRepository { get; private set; }

    // A very simple data layer utility that uses Dapper to interact with the database in my application
    public DataUtility DU { get; private set; }

    // Gets a new user and adds it to the repository
    public UserAccount GetNewUser()
    {
        var ua = new UserAccount();
        this.UserRepository.Add(ua);
        return ua;
    }

    public void Dispose()
    {
        this.Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (this.disposed)
        {
            return;
        }

        if (disposing)
        {
            // Deletes all user accounts created during the test
            foreach (UserAccount ua in this.UserRepository)
            {
                try
                {
                    ua.Delete();
                }
                catch (Exception)
                {
                    // Take no action if delete fails.
                }
            }

            this.DU.DeleteNullLoginFailures(); // Cleans up the database after tests
            this.handle.Dispose();             // Release the managed SafeHandle as well
            Thread.Sleep(1500);
        }

        this.disposed = true;
    }
}
```

Then, within a test I can do this:

```c#
[TestFixture]
public class UserConfigureTests
{
    [Test]
    public void MyExampleTest()
    {
        using (TestFactory tf = new TestFactory())
        {
            var testUser = tf.GetNewUser();

            tf.DU.DoSomethingInTheDatabase(myParameter);

            // Test actions go here. When we exit this using block, the TestFactory cleans
            // up after itself via Dispose, which calls whatever cleanup logic you've written into it.
        }
    }
}
```

This way, I can avoid a lot of code duplication, and if I ever need to change the dependencies of my test I just do it once in the factory. If anyone has feedback on the strategy I took I'd appreciate it!

@tparikka I highly recommend exactly that approach myself.

I have the following in my AssemblyInfo file:

```c#
[assembly: Parallelizable(ParallelScope.Children)]
[assembly: LevelOfParallelism(16)]
```

Do I still need the LevelOfParallelism attribute?

I haven't looked into using LevelOfParallelism yet. It defaults to the number of cores you have.

If your tests are not CPU-bound, a higher value makes sense. But as always with performance, the answer depends so much on your scenario that it's better to measure than guess.

@CharliePoole I'm using TestCaseSource, but it looks like the resulting test cases aren't actually being executed in parallel. Is something like this expected to work:

```c#
[TestFixture]
class Deserialization
{
    public static IEnumerable ShouldDeserializeAllCases() => Enumerable.Repeat(0, 5).Select(x => TimeSpan.FromSeconds(2));

    [TestCaseSource("ShouldDeserializeAllCases"), Parallelizable(ParallelScope.Children)]
    public void ShouldDeserializeAll(TimeSpan t)
    {
        Thread.Sleep(t);
        Assert.AreEqual(1, 1);
    }
}

```

The overall time taken is 10 seconds instead of ~2.

I think there are no children in that case, so you'd do better to use
[Parallelizable(ParallelScope.All)]
or to move your attribute to the class level.

@ParanoikCZE Thanks. I'm actually flying blind with respect to what that attribute means, so I've tried all enum values on there. Regardless of which of All, Children, Fixture or Self I use, I get a 10 second execution time (at least in Visual Studio).

I just tried moving it to the class, but this doesn't seem to help either.

Try this as a source of inspiration :)

```c#
class Deserialization
{
    public static IEnumerable<TestCaseData> ShouldDeserializeAllCases
    {
        get
        {
            for (int i = 1; i <= 5; i++)
                yield return new TestCaseData(TimeSpan.FromSeconds(i)).SetName($"Thread_worker_{i}");
        }
    }

    [TestCaseSource(nameof(ShouldDeserializeAllCases)), Parallelizable(ParallelScope.Children)]
    public void ShouldDeserializeAll(TimeSpan t) => System.Threading.Thread.Sleep(t);
}
```

@ParanoikCZE Thanks again. I tested this out in Visual Studio and the visualization is much clearer, but the tests are still running sequentially. It's easier to see this if you use a constant sleep for each test case instead of increasing steps.

Try adding [assembly: LevelOfParallelism(5)] to your AssemblyInfo. I think there is a default value, but maybe it isn't working for you somehow. Anyway, I'm out of ideas. :)
