Automatic checks for all goals (portability, accessibility, speed…) #50

Open
opened 2019-02-16 15:01:10 +00:00 by SuzanneSoy · 0 comments
SuzanneSoy commented 2019-02-16 15:01:10 +00:00 (Migrated from github.com)
- [ ] portability (test on many platforms)
- [ ] reproducible builds (on different hosts: Debian, Fedora, macOS, Windows)
- [ ] algorithmic complexity (run with many input sizes and check with curve fitting that each algorithm has the expected time and space complexity)
- [ ] speed / response time (virtual wall clock for emulated systems)
  - https://stackoverflow.com/q/40431736
  - https://wiki.qemu.org/Features/record-replay
- [ ] test coverage
- [ ] GUI test coverage (steps to reach some interface; from there, all widgets must be exercised)
- [ ] accessibility coverage (check that all features can be reached using only sound+keyboard, only braille+keyboard, and using a zoomed interface+inverted colors+mouse)
- [ ] documentation coverage (guide, reference, interactive usage examples with a "show me" button, tutorial/blog article explaining how to write the component)
- [ ] upgrades of documents to a newer version of the document type
- [ ] downgrades of documents to an older version of the document type. The downgraded document should be handled correctly: unknown features are moved/copied around as much as possible, and features are tagged to indicate whether they should be unique (don't copy), copied, and so on.
- [ ] number of reviews made by humans: for document types, tools, views, algorithms, GUI, documentation, …
- [ ] check for low redundancy in code: rename all identifiers to `x1`, `x2`, `x3`… (or serialize a symbolic representation of the AST), then compress the source code and check that the compression ratio is not better than a certain threshold
- [ ] subjective measure of code quality / clarity / maintainability, judged by a human
- [ ] check for code smells such as tight coupling
- [ ] formal proofs expected or possible for some of the above
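The algorithmic-complexity item above could be sketched roughly as follows: time the function under test at several input sizes, fit the slope of log(time) against log(n), and compare it to the expected exponent. Everything here is an illustrative assumption — the function names (`loglog_slope`, `check_complexity`), the example workload, and the `tolerance` value are all placeholders, not part of any existing framework.

```python
# Sketch: estimate the exponent k in Theta(n^k) by least-squares fitting
# log(time) against log(n). For Theta(n^k), log(t) ~ k*log(n) + c, so the
# fitted slope approximates k.
import math
import random
import timeit

def loglog_slope(sizes, times):
    """Least-squares slope of log(time) vs. log(n)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def measure(func, sizes, repeat=5):
    """Best-of-N wall-clock timings of func over random inputs of each size."""
    times = []
    for n in sizes:
        data = [random.random() for _ in range(n)]
        times.append(min(timeit.repeat(lambda: func(list(data)),
                                       number=3, repeat=repeat)))
    return times

def check_complexity(func, sizes, expected_exponent, tolerance=0.5):
    """Fail if the fitted exponent is far from the expected one.
    The tolerance is an arbitrary placeholder and would need calibration."""
    slope = loglog_slope(sizes, measure(func, sizes))
    if abs(slope - expected_exponent) > tolerance:
        raise AssertionError(
            f"fitted exponent {slope:.2f}, expected {expected_exponent}±{tolerance}")
    return slope
```

Wall-clock timing is noisy; a real check would likely want the virtual clock mentioned in the speed item (e.g. QEMU record/replay) rather than host time.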
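The low-redundancy item could be sketched along these lines for Python source: rename every distinct identifier to `x1`, `x2`, … so naming differences cannot hide copy-paste, then compress and compare sizes. The helper names and the example threshold are assumptions for illustration only.

```python
# Sketch: normalize identifiers, then use the zlib compression ratio as a
# crude redundancy signal (highly repetitive code compresses "too well").
import io
import tokenize
import zlib

def normalize_identifiers(source):
    """Rewrite each distinct NAME token as x1, x2, ...
    Keywords are NAME tokens too; renaming them is fine because the result
    is only compressed, never executed."""
    mapping = {}
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:
            name = mapping.setdefault(tok.string, f"x{len(mapping) + 1}")
            out.append((tok.type, name))
        else:
            out.append((tok.type, tok.string))
    return tokenize.untokenize(out)

def compression_ratio(source):
    """Compressed size / original size of the identifier-normalized source."""
    data = normalize_identifiers(source).encode()
    return len(zlib.compress(data, 9)) / len(data)

def redundancy_ok(source, threshold=0.3):
    """True when compression is not 'too good'; the 0.3 threshold is an
    arbitrary placeholder that a real project would have to calibrate."""
    return compression_ratio(source) >= threshold
```

Serializing the AST instead of token streams, as the list also suggests, would additionally ignore comments and formatting; the token-level version above is just the simpler sketch.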
Reference: suzanne.soy/os-test-framework#50