Thanks for the edits! I've applied all of them except moving all the testing code inside the call to test_that. It seems cleaner to keep the test setup separate from the actual tests, so that what test_that is testing stays clear. Happy to put everything inside test_that if that's accepted practice here.
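For illustration, here's a minimal sketch of the layout I mean (the names `input` and `expected` are made up for this example): setup lives outside the test_that call, and the call itself contains only the expectation.

```r
library(testthat)

# Setup kept outside test_that(), so the block below shows only what is tested
input <- c(1, 2, 3)
expected <- 6

test_that("sum adds the elements", {
  expect_equal(sum(input), expected)
})
```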
Might as well add some multiple dispatch. Julia people: is this the right way to handle these inputs, or are conditionals more "Julian"?
When there is a flaw you get, for example:

```
`actual` not equal to `expected`.
2/3 mismatches (average diff: 144)
 0 - 1 == -1
 32 - 320 == -288
```

Here 0 and 32 are returned instead of the (fake) expected 1 and 320.
The random tests in R for this one don't show the expected values for the failed cases. This made it hard to see that my solution was failing for inputs > 24h. Replacing `expect_equal(actual, expected)` with `show_failure(expect_equal(!!actual, !!expected))` will show the expected values for the random test cases.
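A minimal sketch of the difference (the values 32 and 320 are stand-ins, not from the actual kata): `!!` unquotes the variables via testthat's quasiquotation support, so the failure message reports the literal values instead of the variable names `actual` and `expected`.

```r
library(testthat)

actual <- 32
expected <- 320

# Without !!, the failure message only names the variables;
# with !!, the concrete values appear in the reported expectation.
show_failure(expect_equal(!!actual, !!expected))
```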
Is there a way to suggest changes to Kata/tests with a pull request?