On Wed, Jun 17, 2020 at 02:30:45AM +0000, Bird, Tim wrote:
> Agreed. You only need machine-parsable data if you expect the CI system
> to do something more with the data than just present it. What that would
> be, something that would be common to all tests (or at least many tests),
> is unclear. Maybe there are patterns in the diagnostic data that could
> lead to higher-level analysis, or even automated fixes, that don't become
> apparent if the data is unstructured. But it's hard to know until you
> have lots of data. I think just getting the other things consistent is a
> good priority right now.
Yeah. I think the main place for this is performance analysis, but I think that's a separate system entirely. TAP is really strictly yes/no, whereas performance analysis is a whole other thing. The only other thing I can think of is some kind of feature analysis, but that would be built out of the standard yes/no output. e.g. if I create a test that checks for specific security mitigation features (*cough*LKDTM*cough*), then a dashboard that shows features down one axis and architectures and/or kernel versions on the other axes gives me a pretty picture. But it's still being built out of the yes/no info.
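To make that concrete, here's a quick sketch (just illustrative Python; the
lkdtm.<arch>.tap file naming and the "results" directory are made up) of
building that kind of feature-vs-architecture matrix purely out of the
ok/not ok lines:

#!/usr/bin/env python3
# Sketch: build a feature-vs-architecture pass/fail matrix from plain
# TAP "ok"/"not ok" result lines. File layout is hypothetical.
import glob
import os
import re

# "ok 1 - FEATURE" / "not ok 2 - FEATURE"; "# ..." diagnostic lines
# are deliberately ignored -- they're for humans, not this script.
TAP_RESULT = re.compile(r'^(not ok|ok)\s+\d+\s*-?\s*(.*)$')

def parse_tap(path):
    """Return {test description: True/False} from one TAP result file."""
    results = {}
    with open(path) as f:
        for line in f:
            m = TAP_RESULT.match(line.strip())
            if m:
                results[m.group(2)] = (m.group(1) == 'ok')
    return results

def build_matrix(result_dir):
    """Map feature -> {arch: passed} from files named lkdtm.<arch>.tap."""
    matrix = {}
    for path in glob.glob(os.path.join(result_dir, 'lkdtm.*.tap')):
        arch = os.path.basename(path).split('.')[1]
        for feature, passed in parse_tap(path).items():
            matrix.setdefault(feature, {})[arch] = passed
    return matrix

if __name__ == '__main__':
    for feature, archs in sorted(build_matrix('results').items()):
        row = '  '.join(f'{a}:{"yes" if ok else "no"}'
                        for a, ok in sorted(archs.items()))
        print(f'{feature:30s} {row}')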
*shrug*
I think diagnostic output should be expressly non-machine-oriented.
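i.e. something like this (a made-up KTAP-ish example), where the "#" lines
are whatever free-form text helps a human read the log, and only the
ok/not ok lines are meant to be parsed:

1..2
ok 1 - CFI_FORWARD_PROTO
# lkdtm: Performing direct entry STACK_GUARD_PAGE_LEADING
# (whatever the kernel happened to spit out while the test ran)
not ok 2 - STACK_GUARD_PAGE_LEADING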