This phase applies to our project but not necessarily to other software development processes. We can now utilise batch facilities to have the program play itself. Most importantly, if we have written different game-playing algorithms (corresponding to different approaches), we can gather game statistics indicating which approach is stronger than another, or which seems to have particular weaknesses against specific other approaches. If simulation time allows, we can extend this series of tests by parameterisation of the existing algorithms. Fine-tuning of those algorithms becomes possible at this point, but more on this will be discussed in the final project report.
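The batch self-play idea above can be sketched as a small round-robin harness. Everything here is illustrative: the strategy names, the `play_game` signature, and the stub engine are assumptions, not the project's actual code; the real Othello engine and algorithms would plug in where the stub sits.

```python
from collections import defaultdict
import itertools
import random

def round_robin(strategies, play_game, games_per_pair=100, seed=0):
    """Play every ordered pair of strategies and tally wins.

    strategies:     dict mapping a name to an algorithm object
                    (opaque to the harness).
    play_game:      callable (black, white, rng) -> 'black', 'white'
                    or 'draw'; the real Othello engine would go here.
    games_per_pair: number of games per ordered pairing.
    """
    rng = random.Random(seed)  # fixed seed so a batch run is repeatable
    wins = defaultdict(int)
    # permutations yields both colour assignments for each pair, so any
    # first-move advantage is measured rather than hidden.
    for black, white in itertools.permutations(strategies, 2):
        for _ in range(games_per_pair):
            result = play_game(strategies[black], strategies[white], rng)
            if result == 'black':
                wins[(black, white)] += 1   # key is (winner, loser)
            elif result == 'white':
                wins[(white, black)] += 1
    return dict(wins)

# Stub engine so the harness runs stand-alone: Black always wins.
def stub_play(black, white, rng):
    return 'black'

tallies = round_robin({'Random': None, 'Greedy': None}, stub_play,
                      games_per_pair=10)
```

With the stub in place, each strategy records ten wins as Black against the other; substituting the real engine and algorithms would yield the statistics discussed above.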
We may also have to conduct a more complex analysis that produces graphs and tables; these too will be valuable for the later project report. Some research can take place on the performance and strength of particular Othello-playing algorithms, as the end of this section discusses in greater detail.
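As a sketch of how raw win counts might be turned into table rows for the report, assuming tallies are keyed by (algorithm, opponent) pairs; the figures below are invented purely for illustration:

```python
def win_rate_rows(tallies, total_games):
    """Convert raw win counts, keyed (algorithm, opponent), into
    percentage rows suitable for a results table."""
    return [(algo, opp, 100.0 * wins / total_games)
            for (algo, opp), wins in sorted(tallies.items())]

# Hypothetical figures for illustration only.
example = {('Greedy', 'Random'): 64, ('Random', 'Greedy'): 31}
rows = win_rate_rows(example, total_games=100)
for algo, opp, pct in rows:
    print(f"{algo:8s} vs {opp:8s}  {pct:5.1f}% wins")
```

The same rows could equally be fed to a plotting library to produce the graphs mentioned above.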
Having performed these analyses, we can draw some tentative conclusions as to which approaches are stronger than others. Since the end-user interface should not expose implementation details, it will be wise to link each approach to a word indicating a difficulty level. The allegedly best algorithm that we have put together should, for instance, be titled 'Master'.
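A minimal sketch of this mapping from internal algorithm names to user-facing difficulty labels; apart from 'Master', every name and label here is an assumption made up for illustration:

```python
# Hypothetical internal names; only the 'Master' label comes from the text.
DIFFICULTY_LABELS = {
    'random_mover':   'Novice',      # assumed name and label
    'greedy_flipper': 'Apprentice',  # assumed name and label
    'minimax_deep':   'Master',      # the allegedly strongest algorithm
}

def difficulty_label(algorithm_name):
    """Map an internal algorithm name to its user-facing label,
    hiding implementation details from the end user."""
    return DIFFICULTY_LABELS.get(algorithm_name, 'Unrated')
```

Keeping the mapping in one table means the interface never needs to know which search technique sits behind each level.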
To determine which algorithms perform better than others in a game of Othello, and to adjust or fine-tune them appropriately, we will certainly have to rewrite some code. This requires a long, repetitive process of testing and re-implementation; in fact, a testing-implementation loop will consume a considerable amount of time during this phase.
As for the research aspect, from the above figures and logs (or possibly even charts) we can derive: