It is not documented so it is difficult to find any information about the current rank standards. You may read this discussion about problems with the current Beta process.
There is also an issue about reranking some existing kata in theorem proving languages. It is educational to see how fast ranks can inflate: theorem proving languages were added in March-April 2019 and now many approved kata from that time are considered to be severely overranked.
So you are saying that the ranking system was drastically changed, and that this has not been documented. Where can I find more information about this?
I just want to get this kata approved eventually. I expect that some warriors may downvote this kata because its estimated rank is too high. The ranking guide is quite outdated and many existing kata are overranked. Right now this kata has the average assessed rank 7 kyu. When the random tests are improved, the rank will be at least 6 kyu. If you want a higher rank, then I suggest adding more Forth features such as recurse. But maybe it is better to leave this kata unchanged (the random tests still need to be improved) and create another kata with more Forth features.
The ranking guide suggests that "basic interpreters and compilers" are 2 kyu. I think you're mistaken.
That kata is from 2013 and quite overranked. It would be around 5 kyu if it were ranked now.
Also, this postfix calculator is 3 kyu.
I can't agree. Knight path is 4 kyu. So is "Adding big numbers".
I suggest changing the estimated rank to
Many invalidated solutions pass random tests. So random tests must be improved. You are thinking in the right direction: it is necessary to include a reference solution inside tests in order to compute the expected result of randomly generated programs.
I suggest the following approach for generating random programs. For each word (operator), define its stack effect as a pair of numbers: the first number is the stack size required before calling the word, and the second is the net change in stack size after applying it. For instance, |drop| = (1, -1), |swap| = (2, 0), |dup| = (1, 1), |+| = (2, -1), etc. A random program can then be generated by selecting an arbitrary sequence of words and analyzing the stack effects of that sequence. If at any point the stack would underflow, it is enough to insert several randomly generated numbers at an appropriate position. In this way, arbitrarily long programs can be generated. It is also possible to generate programs with definitions: each definition can be a completely random sequence of words and numbers; then compute the stack effect of the new definition and use it in a random program.
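A minimal sketch of this idea, in Python. The effects table and the generator names are my own illustration, not the kata's actual test code; it covers only a handful of core words and uses the "insert random numbers on underflow" trick described above:

```python
import random

# Hypothetical stack effects for a few core Forth words:
# (minimum stack depth required, net change in depth).
STACK_EFFECTS = {
    "drop": (1, -1),
    "swap": (2, 0),
    "dup":  (1, 1),
    "over": (2, 1),
    "+":    (2, -1),
    "-":    (2, -1),
    "*":    (2, -1),
}

def random_program(length, rng=random):
    """Build a random Forth program that never underflows the stack."""
    tokens = []
    depth = 0
    for _ in range(length):
        word = rng.choice(list(STACK_EFFECTS))
        need, net = STACK_EFFECTS[word]
        # If the word needs more values than the stack holds,
        # push random numbers first so it cannot underflow.
        while depth < need:
            tokens.append(str(rng.randint(-100, 100)))
            depth += 1
        tokens.append(word)
        depth += net
    return " ".join(tokens)

print(random_program(8))
```

The same effects table can be reused to compute the effect of a new definition: compose the effects of its body tokens, and the resulting pair becomes the definition's own entry in the table.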
If I understand you correctly, that seems like a needlessly fine definition of the term "specified". I struggle to see how there is more than one sensible interpretation of the language used. I presume you have differing experience; please relate it.
I appreciate that you're taking any of your time to assist me in correcting this problem. I find it puzzling that you feel the need to speak roughly. I accept that you may be right in principle and I presume you have greater experience with these matters. I hope that we can have a polite, or even a friendly mode of conversation.
It is the case that there are currently no random tests which test for the ability to define and redefine operators, only random tests that check whether numbers may not be redefined. It seems relevant to note that in order to generate random instructions of arbitrary complexity, it would be necessary to embed a full interpreter into the tests, in order to be able to check the validity of the output. If that's a common practice I'm sure I could implement it; generating such instructions sounds interesting. If there's a less-complex approach that you would find acceptable I'd be happy to hear it. Also, I'm not sure if it's relevant, but I don't believe that your solution currently passes the tests.
Then you have to say "the exception type does not matter". If you say "exception type is not specified", you only explicitly state that you're not specifying the requirements correctly.
THERE'RE NO RANDOM TESTS WITH FUNCTIONS DEFINED DURING RUNTIME. Is it clearer now, or should I repeat it a third time?
"bad" is not very useful or descriptive. It's more of an emotional pejorative. The tests are likely to continue to be "bad" without specific, actionable feedback.
The type of the exception is not specified. You know about types in programming languages, I presume? Any exception type will pass the tests.
What is this supposed to mean?
It looks like there're no random tests with functions defined during runtime, so they're still bad.
Removed division from random testing. Issue should be resolved.
Hmm, I had hoped I'd solved that. I suppose I can remove / from the potential choices.