3. Before the contest

Before the contest starts, a number of things need to be configured by the administrator. You can check this information, such as the problem set(s), test data and time limits, the contest start and end times, and the times at which the scoreboard will be frozen and unfrozen, from the links on the front page.

Note that multiple contests can be defined, each with a corresponding problem set, for example a practice session and the real contest.

3.1 Problems and languages

The problem sets are listed under `Problems'. For each problem it is possible to change whether teams can submit solutions for it (using the toggle switch `allow submit'). If disallowed, submissions for that problem will be rejected, and, more importantly, teams will not see that problem on the scoreboard. Disallowing judging will make DOMjudge accept submissions but leave them queued; this is useful in case an unexpected issue shows up with one of the problems. Timelimit is the maximum number of seconds a submission for this problem is allowed to run before a `TIMELIMIT' response is given (possibly multiplied by a language time factor, see below). Note that a `timelimit overshoot' can be configured to let submissions run a bit longer than the limit. Although DOMjudge will use the actual limit to determine the verdict, this allows judges to see whether a submission is close to the timelimit.
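
As a worked example (the values here are illustrative, not defaults): suppose a problem has a timelimit of 5 seconds and the timelimit overshoot is configured as 1 second. Then:

     effective timelimit  = 5 s             (determines the verdict)
     maximum running time = 5 s + 1 s = 6 s (timelimit plus overshoot)

A submission finishing after 5.5 seconds thus still receives a `TIMELIMIT' response, but judges can see from its runtime that it only just exceeded the limit.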

Problems can be imported into and exported from DOMjudge as zip files that contain the problem metadata and testdata files, based on the problemarchive.org format. See the appendix Problem package format specification for details. Problems can have special compare and run scripts associated with them, to deal with problems that require non-standard evaluation. For more details see the administrator's manual.
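
As a sketch, a minimal problem zip file in this format could look as follows (file names are illustrative; see the specification for the complete structure):

     problem.yaml                    problem metadata (name, limits, ...)
     problem_statement/problem.pdf   the problem statement
     data/sample/1.in, 1.ans         sample testdata, visible to teams
     data/secret/2.in, 2.ans         secret testdata used for judging
     submissions/accepted/sol.c      jury solution that must be accepted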

The `Languages' overview is similar. It has a time factor column: submissions in a language that has time factor 2 will be allowed to run for twice the time that has been specified under Problems. This can be used to compensate for the execution speed of a language, e.g. Java.
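
For example, with a timelimit of 5 seconds for a problem and a time factor of 2 for Java, a Java submission for that problem may run for 5 x 2 = 10 seconds before receiving a `TIMELIMIT' response.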

3.2 Verifying testdata

To check whether your testdata conforms to the specifications in your problem statement, we recommend the checktestdata program, which is available from a separate repository. It does not merely check for simple (spacing) layout errors: you specify a grammar file for the testdata, against which the testdata is then checked. This allows e.g. for bounds checking.
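
For example, a grammar file for a problem whose input starts with a line containing the number of testcases n (1 <= n <= 100), followed by n lines each containing one integer between -1000 and 1000, could look like this (a sketch; see the checktestdata documentation for the full syntax):

     INT(1,100,n) NEWLINE
     REP(n)
         INT(-1000,1000) NEWLINE
     END
     EOF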

This program is built upon the separate library libchecktestdata.h, which can be used to write the syntax-checking part of special compare scripts: it can easily handle the tedious task of verifying that a team's output is syntactically valid, leaving only the semantic validation to another program.
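
For instance, if a problem's output must consist of a single line with one integer, the syntax-checking part of a compare script built on this library boils down to the one-line grammar below (a sketch), leaving the check whether that integer is the right answer to separate logic:

     INT(-1000000,1000000) NEWLINE EOF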

3.3 Testing jury solutions

Before a contest, you will want to test your reference solutions on the system, to see whether they are judged as expected and perhaps to use their runtimes to set timelimits for the problems.

The simplest way to do this is to include the jury solutions in a problem zip file and upload this. You can also upload a zip file containing just solutions to an existing problem. Note that the zip archive has to adhere to the Kattis problem package format. For this to work, the jury/admin member who uploads the problem must have an associated team, to which the solutions will be assigned. The solutions will automatically be judged if the contest is active (it need not have started yet). You can verify whether the submissions gave the expected answer via the link on the jury/admin index page.
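
As a sketch, a zip file containing only solutions for an existing problem could be laid out as follows (directory names as in the Kattis format, source files illustrative):

     submissions/accepted/solution.java        expected verdict: CORRECT
     submissions/wrong_answer/brute.c          expected verdict: WRONG-ANSWER
     submissions/time_limit_exceeded/slow.py   expected verdict: TIMELIMIT

This makes it easy to check on the jury/admin index page whether each solution received its expected verdict.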

3.4 Practice session

If your contest has a test session or practice contest, also use it as a general rehearsal of the jury system: judge test submissions as you would during the real contest and answer incoming clarification requests.

