Making Objectivity Work Harder: Text, Tools and Fuzzing (R20c)
We have worked hard over the past few years to increase objectivity in evaluation requirements and methodology. Yet we find that some subjectivity still has to remain, and we have recently seen how earlier attempts to improve objectivity by being ever more specific can even be counter-productive. This presentation therefore looks at the benefits that improved objectivity can bring to every party in an evaluation: developers, evaluators, certifiers/validators and, of course, users. These benefits are tied to the methods used to achieve objectivity, and the presentation discusses examples applied not only to design requirements and analysis, but also to the use of scripts, interface modeling and constraint description in test automation and fuzzing.
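To make the last point concrete, the idea of combining an interface model with a constraint description in fuzzing can be sketched as follows. This is a minimal illustration only, not any specific tool from the presentation: the interface model, parameter names, and constraint are all hypothetical. The model declares each parameter's type and range, the constraint is a predicate over whole inputs, and the generator produces cases that either satisfy the constraint (for conformance testing) or deliberately violate it (for robustness fuzzing).

```python
import random

# Hypothetical interface model: each parameter with a type and value range.
# All names here are illustrative, not taken from any real evaluation tool.
INTERFACE_MODEL = {
    "length": {"type": int, "min": 0, "max": 65535},
    "flags": {"type": int, "min": 0, "max": 0xFF},
}

def constraint(params):
    """Example constraint over the whole input: if the high flag bit is
    set, the length field must not exceed 1024."""
    return not (params["flags"] & 0x80) or params["length"] <= 1024

def random_value(spec):
    """Draw one random value within a parameter's declared range."""
    return random.randint(spec["min"], spec["max"])

def generate_case(valid=True, max_tries=1000):
    """Generate one fuzz case that satisfies (valid=True) or violates
    (valid=False) the constraint, by rejection sampling."""
    for _ in range(max_tries):
        params = {name: random_value(spec)
                  for name, spec in INTERFACE_MODEL.items()}
        if constraint(params) == valid:
            return params
    return None  # constraint too tight to hit by random sampling

if __name__ == "__main__":
    print("valid case:  ", generate_case(valid=True))
    print("invalid case:", generate_case(valid=False))
```

Valid cases check that the implementation accepts well-formed input; invalid cases probe how it handles inputs the model says should be rejected. An objective, machine-readable constraint like this is exactly the kind of artifact that lets evaluators reproduce a developer's test results.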