On 10/6/21 3:35 PM, Leonard Crestez wrote:
I counted the [FAIL] or [ OK ] markers but not the output of nettest itself. I don't know what to look for, I guess I could diff the outputs?
Shouldn't it be sufficient to compare the exit codes of the nettest client?
Mistakes happen. The 700+ tests that exist were verified by me when I submitted the script - that each test passes when it should and fails when it should. A "FAIL" can have many causes. I tried to give nettest.c separate exit codes to distinguish timeouts from ECONNREFUSED, etc., but I could easily have made a mistake, so scanning the output is the best way. Most of the 'supposed to fail' tests have a HINT saying why they should fail.
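As a rough sketch of what comparing exit codes would look like, here is a minimal harness. The `run_case` helper and the expected-code convention are illustrative only - they are not nettest's actual exit-code map:

```shell
#!/bin/sh
# Hypothetical sketch: verify a test case by comparing the command's
# exit code against the expected one, instead of only eyeballing
# [ OK ]/[FAIL] markers.  'true' and 'false' stand in for nettest runs.
run_case() {
    expected=$1; shift
    "$@"
    rc=$?
    if [ "$rc" -eq "$expected" ]; then
        echo "[ OK ] $* (rc=$rc)"
    else
        echo "[FAIL] $* (rc=$rc, expected $expected)"
    fi
}

run_case 0 true     # expected success, got success
run_case 1 false    # expected failure, got failure: still OK
run_case 0 false    # unexpected failure: flagged
```

The caveat above still applies: this only catches mismatches if the exit codes themselves are assigned correctly, which is exactly the part that is easy to get wrong.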
The output is also modified by a previous change that no longer captures server output separately and instead lets it be combined with the client's. That change is required for this one: out=$(nettest -k) does not return after the fork unless the pipe is also closed.
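The reason out=$(...) blocks can be demonstrated without nettest at all: command substitution reads the pipe until EOF, and a backgrounded child that inherits the pipe's write end keeps it open. A small sketch, using sleep as a stand-in for the forked server:

```shell
#!/bin/sh
# Case 1: the background child inherits stdout, so the capture pipe
# stays open and $( ) blocks for the full 2 seconds.
start=$(date +%s)
out=$( (sleep 2 & echo parent-done) )
end=$(date +%s)
echo "inherited stdout: waited $((end - start))s, out=$out"

# Case 2: the child's stdout is redirected away from the pipe, so the
# substitution sees EOF as soon as the foreground part exits.
start=$(date +%s)
out=$( (sleep 2 >/dev/null 2>&1 & echo parent-done) )
end=$(date +%s)
echo "closed pipe: waited $((end - start))s, out=$out"
```

This is why a self-backgrounding server must close (or redirect) its copy of stdout before the caller's command substitution can return.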
I did not look at your change; mine is relatively minimal because it only changes who decides when the server goes into the background: the shell script or the server itself. This makes it work very easily even for tests with multiple server instances.
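The two models can be sketched side by side. This is not the selftest code itself; sleep stands in for a server binary, and `daemonize` is a hypothetical helper mimicking a self-backgrounding flag:

```shell
#!/bin/sh
# Model 1: the shell script backgrounds the server and must track
# each instance's pid itself.
sleep 5 &
pid1=$!

# Model 2: the "server" detaches itself; the foreground invocation
# returns immediately, and the script needs no per-instance job
# bookkeeping.  Redirecting stdout is what keeps a caller's capture
# pipe from hanging (see the -k discussion above).
daemonize() { ( "$@" >/dev/null 2>&1 & ); }
daemonize sleep 5

# Model 1 cleanup is explicit; model 2 would key off a pid file or
# a kill-by-name in a real script.
kill "$pid1" 2>/dev/null
```

With model 2, starting several server instances is just several foreground invocations, which is what makes multi-server tests simple.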
The logging issue is why I went with one binary doing both server and client after nettest.c got support for changing namespaces.