If you have questions about any of these steps, please ask in #perl6 on irc.freenode.net.
If a ticket is marked "new" in RT, no one has done anything with it yet. If you can duplicate the error that was reported, please "open" the ticket and leave a comment confirming that the issue exists.
- Go to http://rakudo.org/tickets/
- Click on "Tickets blocking test coverage".
- Find a ticket and extract the code needed to test the issue.
- Add a test to an appropriate file in roast, verify it passes, and push it (see the sketch after this list).
- Comment on the ticket, noting where the test is.
- If possible, close the ticket; if you don't have permission and would like to be able to close tickets, ask in #perl6 on freenode.
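For example, the roast addition might look like the following (the file, ticket number, and test are all hypothetical; pick a file that matches the feature under test, and remember to bump the "plan" count at the top of the file):

    # in S32-str/uc.t (hypothetical location)
    # RT #123456
    is 'hello'.uc, 'HELLO', 'RT #123456 - .uc works on a plain string';

From a rakudo checkout you can usually run a single spec file with something like "make t/spec/S32-str/uc.t" to verify the new test passes before pushing.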
- Look in roast for #?rakudo todo/skip markers.
- Find one that doesn't say "unspecced" or "NYI" and doesn't already have an "RT #xxx" comment.
- See "Fudge a Failing Test" above; in this case, though, just update the existing marker line rather than fudging the whole file (see the example after this list).
- Check out and build rakudo.
- Run perl tools/update_passing_test_data.pl.
- Examine the output for tests that (mostly) pass.
- Fudge them (opening tickets as described above, or using "NYI" if applicable).
- Occasionally a test turns out to be outdated; if so, remove it from roast.
- Add the file to t/spectest.data in rakudo.
- Push the updates to rakudo and roast (a command-line sketch follows this list).
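A minimal sketch of that flow, assuming the MoarVM backend and default build options:

    git clone https://github.com/rakudo/rakudo.git
    cd rakudo
    perl Configure.pl --gen-moar --gen-nqp && make
    perl tools/update_passing_test_data.pl
    # for each file that (mostly) passes: fudge the failures in roast,
    # then add the file's name to t/spectest.data and commit/push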
- Get a threaded build of perl5 (easy with perlbrew).
- In rakudo, run tools/autounfudge.pl (you may want to run only one synopsis at a time, or leave it running on a fast multicore box).
- Carefully examine the resulting diff file; you may be able to unfudge some items entirely (but watch out where a comment indicates otherwise).
- You may only be able to unfudge for your particular rakudo backend.
- Verify that the unfudged tests pass, then push the updates to roast (see the sketch below).
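A sketch, assuming perlbrew is installed (the perl5 version is arbitrary; any threaded build should do):

    perlbrew install perl-5.20.0 --thread
    perlbrew use perl-5.20.0
    cd rakudo
    perl tools/autounfudge.pl
    # inspect the diff the script produces before applying any of it to roast

Note that roast fudge markers can be backend-specific (e.g. #?rakudo.moar or #?rakudo.jvm), which is what lets you unfudge for just the backend you actually tested.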
- Build a copy of nqp HEAD.
- Run prove -v t/docs/opcodes.t.
- Take any failing test and find the opcode that isn't documented (a setup sketch follows).
- Figure out what the opcode does.
- Add documentation for it to docs/ops.markdown.
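A sketch of the setup, assuming the MoarVM backend:

    git clone https://github.com/perl6/nqp.git
    cd nqp
    perl Configure.pl --gen-moar && make
    prove -v t/docs/opcodes.t
    # note an opcode the test reports as undocumented, work out what it
    # does, describe it in docs/ops.markdown, and re-run the test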
This is much less likely to be an issue after the Christmas release.
- Check out a copy of rakudo and run "make spectest".
- Find a failing test.
- Open a rakudobug describing the failure (mail rakudobug@perl.org).
- Fudge the failing test, putting "RT #xxx" somewhere in the marker's description (see the example below).
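For example (the test and ticket number are hypothetical), the fudge marker goes on the line directly above the failing test in the roast file:

    #?rakudo todo 'RT #123456'
    is some-routine(), 'expected value', 'some-routine gives the right answer';

Use "todo" when the test runs but produces a wrong result, and "skip" when it dies or hangs.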