  • Custom User Avatar

    Approved! Thanks

  • Custom User Avatar

    approved by someone

  • Default User Avatar

    Thanks for the translation! Never seen Julia code before.

  • Default User Avatar

    Right, should be fixed now. Thanks!

  • Custom User Avatar

    Thanks, approved!

  • Custom User Avatar

    Somewhat off-topic, but I really like to read detailed and insightful exchanges like this one. If you power users could produce more of them, it would be really great (to me, at least).

  • Custom User Avatar

    "Having unpredictable random test cases makes it harder to create good kata that work reliably, because the users will be exposed to test cases that the kata author never used or considered when developing the kata."

    I would argue that this actually helps to make better kata. Occasionally people run into situations that were never considered by the author; they raise it as an issue, it gets fixed, and now the kata is better. While it's true that it's not great for the occasional user who runs into this kind of bug, I also think it's not great to have the (rare) kata that is actually incomplete because the author's solution was simply never tested on inputs it fails on.

  • Default User Avatar

    My Optical Character Recognition kata had some discussion of this about 3 years ago.

    As for practical Codewars-specific downsides: I actually came across one only yesterday, while changing the R version of this kata. I removed the call to set.seed(), checked that everything still seemed to be working (with different test cases on each submission), and hit "Re-publish" - only to be told that the kata couldn't be published because my solution was failing tests. (Wait, what?) It turned out that the test code itself contained a rarely-occurring bug (arising from the way the sample() function in R behaves differently when its main argument is a vector of length 1). This hadn't been an issue before, because it didn't affect any of the 512 random test cases generated by the fixed random seed I was using. But with a different seed on each submission, you occasionally get test cases that are affected, causing a crash. It was sheer good luck that such a test case was generated by Codewars' (one and only!) final check before publication. If that hadn't happened, then some hapless codewarrior would have had to discover the problem, slowly and painfully realize that the problem wasn't in their own code, get frustrated, raise an issue on the discussion board, etc. - all the stuff that unit testing is supposed to prevent.
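
    For anyone who hasn't met this particular R quirk, here is a minimal sketch of the general behaviour (plain R, not this kata's actual test code): when sample() receives a single positive number rather than a longer vector, it silently switches to sampling from 1:n.

        candidates <- c(3, 5, 7)
        sample(candidates, 1)      # picks one of 3, 5 or 7, as intended

        candidates <- c(7)
        sample(candidates, 1)      # sample(7, 1): picks from 1:7, not just 7!

        # A common guard is to sample indices instead of the vector itself:
        candidates[sample(length(candidates), 1)]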

    Having unpredictable random test cases makes it harder to create good kata that work reliably, because the users will be exposed to test cases that the kata author never used or considered when developing the kata. Often this doesn't cause any problems, but sometimes - as I was reminded yesterday - it does.

    Another Codewars-specific problem relates to execution time. With some kata (7x7 Skyscrapers is a good example) randomly-generated test cases can vary widely in required execution time. Big-O bounds are conventionally quoted for the worst case, but what Codewars measures is more like the best case, because you can keep re-trying until you get a test set that doesn't include any hard examples. I've certainly had the experience of writing a not-really-good-enough kata solution and getting it to pass anyway by repeatedly hitting "Attempt" until it gets a problem set that it can solve within the required 12 seconds.

    There are some compromise solutions. You can have a fixed set of test cases, but present them in an unpredictable random order; this at least makes cheating a bit more difficult. Or you can minimize the number of unpredictable test cases (only a few are really needed to catch the cheaters). It would also help if the Codewars test frameworks all stopped execution after the first failed test, instead of trying all the tests and showing what all the expected answers are.
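
    Roughly, that compromise could look like this in R (a sketch only: make_case(), reference_solution() and user_solution() are placeholder names, not this kata's real test code):

        set.seed(20190101)                 # fixed seed: the same pool of cases on every run
        fixed_cases <- replicate(512, make_case(), simplify = FALSE)

        set.seed(NULL)                     # re-seed from the clock: order now varies per submission
        shuffled <- fixed_cases[sample(length(fixed_cases))]
        extras   <- replicate(4, make_case(), simplify = FALSE)   # a few genuinely unpredictable cases

        for (tc in c(shuffled, extras)) {
          if (!identical(user_solution(tc), reference_solution(tc))) {
            stop("Test failed")            # stop at the first failure, without revealing the expected answer
          }
        }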

    In the end, I guess it comes down to what the purpose of Codewars is. If it's just to help people learn to program, then perhaps cheating doesn't matter too much and we should emphasize reliable, well-tested, maintainable code. But if the value of 1kyu status goes beyond bragging rights with your friends (if it's helping people get real-world jobs, for example), then it needs to be as difficult as possible to defeat.

    Anyway, thanks for your interest. If you have any further thoughts, please post them here.

  • Custom User Avatar

    Thanks for the remark! I renamed the function to snake_case, but didn't implement backward compatibility, as I don't think it's warranted when there are only three existing Julia solutions at the moment.

  • Default User Avatar

    This is a conversation I've had several times before on Codewars. For software testing in general, unpredictable unit tests are a really bad idea - they lead to bugs that occur only sometimes, unpredictably. Such bugs can be very hard to find and fix. But in an adversarial context like Codewars, something is needed to prevent the adversary easily defeating the system. There are some compromise solutions possible, which I've experimented with in my other kata. But for a simple kata like this one, I'll concede the point and switch to using unpredictable random seeds.

  • Default User Avatar

    The random tests are generated using a fixed pseudo-random seed, so they are the same every time. This ensures that any bugs in your code will be reproducible.
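
    In R terms it's just the usual set.seed() idiom (a sketch with an arbitrary seed value, not the kata's literal test code):

        set.seed(512)
        a <- sample(1:100, 5)
        set.seed(512)
        b <- sample(1:100, 5)
        identical(a, b)   # TRUE: the "random" inputs are the same on every run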

  • Custom User Avatar

    I think there are still some describe blocks without it blocks in the sample tests?

  • Custom User Avatar

    Thanks!

  • Custom User Avatar

    Ref sol and random test generation should be fast, to give the user more time to solve -> 7 kyu
