  • Custom User Avatar

    Well no, and please notice that I did say noisy tests are an issue. It's just that I know why they are the way they are (not everyone knows of, or likes, log tabs and other ways of handling bulky output). And when I see that a remark on a non-obvious thing does not land, I think there is no point playing back-and-forth, and it will be easier to just fix things myself.

  • Custom User Avatar

    So it's good enough when everyone other than us does it? Gotcha lmao.

  • Custom User Avatar

    Oh gosh, if the next thing is going to be reopening and closing the same issue over and over, then please don't, and I will think of something myself.

    The kata is much better than it was before the overhaul, so even with the issue of noisy tests, it's at least not as broken as it used to be.

  • Custom User Avatar

    Not an issue.

  • Custom User Avatar

    Python tests print to console.

  • Custom User Avatar

    A kata like this one will always be problematic, because there is no good way to present a large number of potentially large strings full of unprintable characters. The Python framework does indeed make many things difficult which other frameworks make easy. But in the case of this particular test suite, there are several separate issues:

    • reprs of values being inherently ugly (binary data formatted as strings),
    • unnecessary prints for successful tests,
    • unnecessary tests running after the first failure.

    We have patterns to address each of these. They give relatively nice output, but at the same time they require you to squint a bit to see them as the "normal" patterns used for "normal" tests:

    • To get rid of ugly reprs, use some nicely formatted representation instead (or in addition), e.g. pairs/blocks of hex digits, like in a hex editor.
    • To avoid unnecessary prints, we craft our own assertion, perform our own equality check, and use pass_ on success or fail on failure. We show small values in assertion messages, and large, bulky values in CW's log tabs.
    • To stop on the first failure, we either manually track the result of the comparison and bail out of the testing loop on the first mismatch, or use allow_raise and patch around the thrown exception to prevent ugly stack traces.
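    For the first of those patterns, a hex-editor-style representation could be as simple as the following sketch (the helper name to_hex is made up for illustration):

    ```python
    def to_hex(data):
        # Format bytes as space-separated two-digit hex pairs, hex-editor style.
        return " ".join(f"{b:02x}" for b in data)

    print(to_hex(b"Man"))  # 4d 61 6e
    ```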

    The exact solution might depend on how much effort we want to put into a single kata. What I would try is something along the lines of:

    def assert_toBase64_pretty_print(data, expected):
        actual = user_encode(data)
        if actual == expected:
            test.pass_("Test passed")
            return True
        else:
            printLogTab("Input", data)
            printLogTab("Input (hex)", toHex(data))
            printLogTab("Actual", actual)
            printLogTab("Expected", expected)
            test.fail("Test failed")
            return False

    def random_tests():
        for _ in range(NUM_RANDOM_TESTS):
            input_bytes = generate_random_input()
            expected = refsol(input_bytes)
            if not assert_toBase64_pretty_print(input_bytes, expected):
                break  # stop after the first failure
    

    ... or something along those lines, you get the idea :) With a lot of additional caveats, because you most probably want to handle mutation of input arrays, encode/decode roundtrips, and all of that. So yeah, there is going to be some effort in just making the tests look good.
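    One of those caveats, checking that the user solution does not mutate its input, could be sketched like this (a hypothetical illustration, not the kata's actual tests: snapshot the input before the call and compare afterwards):

    ```python
    def check_no_mutation(user_fn, data):
        # Keep a copy of the input so in-place modification can be detected.
        snapshot = list(data)
        user_fn(data)
        return list(data) == snapshot

    # A non-mutating function passes, a mutating one is caught:
    assert check_no_mutation(lambda xs: sorted(xs), [3, 1, 2])
    assert not check_no_mutation(lambda xs: xs.sort(), [3, 1, 2])
    ```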

  • Custom User Avatar

    > You don't print to console. Simple as.
    >
    > It's literally just not how you're meant to write tests for cw, and produces extremely unintuitive (even sometimes nondeterministic!) results for many test frameworks due to async/parallelised test execution etc. There's a reason why every test framework under the sun comes with its own logging/output setup.

    Python tests do not run async, so that's a non-issue. I picked prints because they only show up when you actually expand the test in the UI, and so don't clutter the whole output with too much detail. More importantly, the Python test framework has many issues, one of which is that it doesn't have decent logging support, and the assertion messages suck, so prints are actually the better choice here. The test data itself is no more meaningful than the current numbering, so I don't see any reason to change that either.

    I'm also sticking with both refsols. Sure, we could roundtrip the random data through the encoder to get both ends of the data, and we could cop out and use base64, but why then encourage everyone not to use the stdlib while not giving a decent example of what a solution could look like once you have passed the tests?

    I've pushed a fix for the random b64 data generator.

  • Custom User Avatar

    You don't print to console. Simple as.

    It's literally just not how you're meant to write tests for cw, and produces extremely unintuitive (even sometimes nondeterministic!) results for many test frameworks due to async/parallelised test execution etc. There's a reason why every test framework under the sun comes with its own logging/output setup.

    Like just put the test data in an it title if you wanna show it to the user, it's entirely baffling that you generate a completely meaningless title and then print the actually meaningful information in it. Even better, use assertion messages like a civilised person. Why are we even showing the inputs of passed tests at all? It just adds more noise to runner output. The user only needs to care about failed inputs anyway.

    You don't need both refsols, you only need a refsol in a single direction. Generate en/decoded data (whichever is more convenient) and have a refsol that turns it into the other one. On the subject, I'd also advise just importing base64 and using that rather than rolling your own non-standard b64 implementation for the tests.
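    The single-direction approach suggested here might look like the following sketch (to_base_64 / from_base_64 are the kata's user functions; only the stdlib is needed to build the test pair):

    ```python
    import base64
    import os

    def make_test_pair():
        # Only the raw side is generated randomly; the encoded side comes
        # from the stdlib, so no hand-rolled refsol is needed for either
        # direction.
        raw = os.urandom(24)
        encoded = base64.b64encode(raw).decode("ascii")
        return raw, encoded

    raw, encoded = make_test_pair()
    # Both directions can then be tested against the same pair:
    #   test.assert_equals(to_base_64(raw), encoded)
    #   test.assert_equals(from_base_64(encoded), raw)
    ```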

    Whilst technically non-problematic, this is still definitely a mistake:

    result = to_base_64(decoded)
    print(f"Encoding {decoded!r}")
    test.assert_equals(result, encoded)
    print(f"Decoding {encoded!r}")
    test.assert_equals(from_base_64(result), decoded)
    
  • Custom User Avatar

    Thanks for your kind assessment!

    Yup, I see where I forgot a "".join() on the random base64 generator, will fix that tomorrow.
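    For reference, a generator of that shape might look like the sketch below (names are made up; without the "".join, the function would return a generator object instead of a string):

    ```python
    import random
    import string

    B64_ALPHABET = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"

    def random_b64(n_groups):
        # Generating whole 4-character groups sidesteps '=' padding.
        # The "".join is essential: without it this returns a generator
        # object, not a str.
        return "".join(random.choice(B64_ALPHABET) for _ in range(4 * n_groups))
    ```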

    However, you'll have to explain why printing to the console is a bad idea and what else is such a mess here.

  • Custom User Avatar

    Python:

    • Tests pass lists
    • Tests print to console
    • Is, in general, an absolute mess
  • Custom User Avatar

    Only where I wasn't yet familiar with the specific language :-) Swift especially took a while to hit upon a suitable idiom.

  • Custom User Avatar

    C translation (author gone)
