• Thanks for the edits! I've done everything except packing all of the testing code inside the call to test_that. It seems cleaner to keep the test setup separate from the actual tests so that what test_that is testing stays clear. Happy to move everything into test_that if that's accepted practice here.

    • No sample tests
    • Comments should be removed
    • Random test cases should be ordered so that imports come first, followed by random input generation and assertions, all packed within test_that (context should be removed); see the sketch after this list
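
    A minimal sketch of that layout, assuming a hypothetical kata function add (the name and input ranges are made up for illustration): the library call sits at the top, and the random input generation plus assertions live together inside test_that, with no context() call:

      library(testthat)

      add <- function(a, b) a + b  # stand-in for the kata solution

      test_that("random tests", {
        for (i in 1:100) {
          # random input generation
          a <- sample(-1000:1000, 1)
          b <- sample(-1000:1000, 1)
          # assertion against an independently computed reference value
          expect_equal(add(a, b), a + b)
        }
      })
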
  • Might as well add some multiple dispatch. Julia people, is this the right way to handle these inputs or are conditionals more "julian"?

  • When there is a flaw, you get, for example:

    Test Failed
    	`actual` not equal to `expected`.
    	2/3 mismatches (average diff: 144)
    	[1]  0 -   1 ==   -1
    	[2] 32 - 320 == -288
    

    Here 0 and 32 are returned instead of the (fake) expected values 1 and 320.
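
    A failure of that shape can be reproduced with a plain expect_equal on two vectors (the values below are made up to mirror the report above; newer testthat versions word the report differently):

      library(testthat)

      actual   <- c(0, 32, 7)
      expected <- c(1, 320, 7)

      # Two of the three elements differ, so testthat reports "2/3
      # mismatches" together with each element-wise difference.
      expect_equal(actual, expected)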

  • The random tests in R for this one don't show the expected values for failed tests, which made it hard to see that my solution was failing for cases > 24h.

    Replacing expect_equal(actual, expected) with show_failure(expect_equal(!!actual, !!expected)) will show the actual and expected values for the random test cases.
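
    For example, a minimal sketch (to_seconds is a made-up stand-in for the kata's function, given a deliberate wrap-around bug so that the failure report prints):

      library(testthat)

      to_seconds     <- function(h) h %% 24 * 3600  # buggy stand-in: wraps at 24h
      to_seconds_ref <- function(h) h * 3600        # stand-in reference solution

      h <- 30  # a case > 24h, where the buggy stand-in breaks
      actual <- to_seconds(h)
      expected <- to_seconds_ref(h)

      # !! unquotes the values, so the report prints the mismatching values
      # themselves rather than the bare names `actual` and `expected`;
      # show_failure() prints the report instead of signalling a failure.
      show_failure(expect_equal(!!actual, !!expected))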

    Is there a way to suggest changes to Kata/tests with a pull request?
