These tests exercise the cpufreq driver on the ARM architecture. As the ARM cpufreq support is not yet complete, the test suite is based on the cpufreq sysfs API exported on the Intel architecture, assuming it is consistent across architectures.
The different tests are described at:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts
Each test's header contains a URL to the anchor of the related item on this web page describing the script.
Daniel Lezcano (2):
  cpufreq: add a test set for cpufreq
  cpufreq: check the frequency affect the performances
 Makefile           |   6 ++
 cpufreq/test_01.sh |  43 ++++++++++
 cpufreq/test_02.sh |  43 ++++++++++
 cpufreq/test_03.sh |  64 ++++++++++++++
 cpufreq/test_04.sh |  85 +++++++++++++++++++
 cpufreq/test_05.sh | 145 ++++++++++++++++++++++++++++++++
 cpufreq/test_06.sh | 236 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 utils/Makefile     |  11 +++
 utils/cpucycle.c   | 102 ++++++++++++++++++++++
 utils/nanosleep.c  |  45 ++++++++++
 10 files changed, 780 insertions(+), 0 deletions(-)
 create mode 100644 cpufreq/test_01.sh
 create mode 100644 cpufreq/test_02.sh
 create mode 100644 cpufreq/test_03.sh
 create mode 100644 cpufreq/test_04.sh
 create mode 100644 cpufreq/test_05.sh
 create mode 100644 cpufreq/test_06.sh
 create mode 100644 utils/Makefile
 create mode 100644 utils/cpucycle.c
 create mode 100644 utils/nanosleep.c
These tests are described at:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
---
 Makefile           |   6 ++
 cpufreq/test_01.sh |  43 +++++++++++++++
 cpufreq/test_02.sh |  43 +++++++++++++++
 cpufreq/test_03.sh |  64 +++++++++++++++++++++++
 cpufreq/test_04.sh |  85 ++++++++++++++++++++++++++++++
 cpufreq/test_05.sh | 145 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 utils/Makefile     |  19 +++++++
 utils/nanosleep.c  |  45 ++++++++++++++++
 8 files changed, 450 insertions(+), 0 deletions(-)
 create mode 100644 cpufreq/test_01.sh
 create mode 100644 cpufreq/test_02.sh
 create mode 100644 cpufreq/test_03.sh
 create mode 100644 cpufreq/test_04.sh
 create mode 100644 cpufreq/test_05.sh
 create mode 100644 utils/Makefile
 create mode 100644 utils/nanosleep.c
diff --git a/Makefile b/Makefile
index 73d1f66..16f17b8 100644
--- a/Makefile
+++ b/Makefile
@@ -14,8 +14,14 @@
 #******************************************************************************/
 
 all:
+	@(cd utils; $(MAKE))
 	@(cd testcases; $(MAKE) all)
 
+check:
+	@(cd utils; $(MAKE) check)
+	@(cd cpufreq; $(MAKE) check)
+
 clean:
+	@(cd utils; $(MAKE) clean)
 	@(cd testcases; $(MAKE) clean)
diff --git a/cpufreq/test_01.sh b/cpufreq/test_01.sh new file mode 100644 index 0000000..38c6353 --- /dev/null +++ b/cpufreq/test_01.sh @@ -0,0 +1,43 @@ +#!/bin/bash +#/******************************************************************************* +# Copyright (C) 2011, Linaro Limited. +# +# This file is part of PM QA. +# +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the Eclipse Public License v1.0 +# which accompanies this distribution, and is available at +# http://www.eclipse.org/legal/epl-v10.html +# +# Contributors: +# Daniel Lezcano daniel.lezcano@linaro.org (IBM Corporation) +# - initial API and implementation +#******************************************************************************/ + +# URL : https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_01 + +CPU_PATH="/sys/devices/system/cpu" +FILES="scaling_available_frequencies scaling_cur_freq scaling_setspeed" + +check_file() { + + for i in $FILES; do + + printf "%-70s" "checking $i file presence ... " + + if [ ! -f $1/$i ] ; then + printf "FAIL\n" + return 1; + fi + + printf "PASS\n" + + done + + return 0; +} + +for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "for '$i':\n" + check_file $CPU_PATH/$i/cpufreq || exit 1; +done diff --git a/cpufreq/test_02.sh b/cpufreq/test_02.sh new file mode 100644 index 0000000..69239e1 --- /dev/null +++ b/cpufreq/test_02.sh @@ -0,0 +1,43 @@ +#!/bin/bash +#/******************************************************************************* +# Copyright (C) 2011, Linaro Limited. +# +# This file is part of PM QA. +# +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the Eclipse Public License v1.0 +# which accompanies this distribution, and is available at +# http://www.eclipse.org/legal/epl-v10.html +# +# Contributors: +# Daniel Lezcano daniel.lezcano@linaro.org (IBM Corporation) +# - initial API and implementation +#******************************************************************************/ + +# URL : https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_02 + +CPU_PATH="/sys/devices/system/cpu" +FILES="scaling_available_governors scaling_governor" + +check_file() { + + for i in $FILES; do + + printf "%-70s" "checking $i file presence ... " + + if [ ! -f $1/$i ] ; then + printf "FAIL\n" + return 1; + fi + + printf "PASS\n" + + done + + return 0; +} + +for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "for '$i':\n" + check_file $CPU_PATH/$i/cpufreq || exit 1; +done diff --git a/cpufreq/test_03.sh b/cpufreq/test_03.sh new file mode 100644 index 0000000..53a22c6 --- /dev/null +++ b/cpufreq/test_03.sh @@ -0,0 +1,64 @@ +#!/bin/bash +#/******************************************************************************* +# Copyright (C) 2011, Linaro Limited. +# +# This file is part of PM QA. +# +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the Eclipse Public License v1.0 +# which accompanies this distribution, and is available at +# http://www.eclipse.org/legal/epl-v10.html +# +# Contributors: +# Daniel Lezcano daniel.lezcano@linaro.org (IBM Corporation) +# - initial API and implementation +#******************************************************************************/ + +# URL : https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_03 +CPU_PATH="/sys/devices/system/cpu" + +check_governor_change() { + + oldgov=$(cat $1/scaling_governor) + + for i in $(cat $1/scaling_available_governors); do + + printf "%-70s" "setting governor to $i ... 
" + echo $i > $1/scaling_governor + + curgov=$(cat $1/scaling_governor) + if [ "$curgov" != "$i" ]; then + printf "FAIL\n" + echo $oldgov > $1/scaling_governor + return 1 + fi + + printf "PASS\n" + + done + + echo $oldgov > $1/scaling_governor + + return 0 +} + +check_root() { + + printf "%-70s" "checking if we are root ... " + + if [ "$(id -u)" != "0" ]; then + printf "FAIL\n" + return 1 + fi + + printf "PASS\n" + + return 0 +} + +check_root || exit 1 + +for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "for '$i':\n" + check_governor_change $CPU_PATH/$i/cpufreq || exit 1; +done diff --git a/cpufreq/test_04.sh b/cpufreq/test_04.sh new file mode 100644 index 0000000..aec33e8 --- /dev/null +++ b/cpufreq/test_04.sh @@ -0,0 +1,85 @@ +#!/bin/bash +#/******************************************************************************* +# Copyright (C) 2011, Linaro Limited. +# +# This file is part of PM QA. +# +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the Eclipse Public License v1.0 +# which accompanies this distribution, and is available at +# http://www.eclipse.org/legal/epl-v10.html +# +# Contributors: +# Daniel Lezcano daniel.lezcano@linaro.org (IBM Corporation) +# - initial API and implementation +#******************************************************************************/ + +# URL : https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_04 +CPU_PATH="/sys/devices/system/cpu" + +check_frequency_change() { + + oldgov=$(cat $1/scaling_governor) + oldfreq=$(cat $1/scaling_cur_freq) + + printf "%-70s" "setting governor to 'userspace' ... 
" + echo userspace > $1/scaling_governor + curgov=$(cat $1/scaling_governor) + if [ "$curgov" != "userspace" ]; then + printf "FAIL\n" + echo $oldgov > $1/scaling_governor + return 1 + fi + + printf "PASS\n" + + freqs=$(cat $1/scaling_available_frequencies) + nrfreqs=$(cat $1/scaling_available_frequencies | wc -w) + latency=$(cat $1/cpuinfo_transition_latency) + + for i in $freqs; do + + printf "%-70s" "setting frequency to $i KHz ... " + echo $i > $1/scaling_setspeed + + # wait the latency period + ../utils/nanosleep $((nrfreqs * latency)) + + curfreq=$(cat $1/scaling_cur_freq) + if [ "$curfreq" != "$i" ]; then + printf "FAIL\n" + echo $oldfreq > $1/scaling_setspeed + echo $oldgov > $1/scaling_governor + return 1 + fi + + printf "PASS\n" + + done + + echo $oldfreq > $1/scaling_setspeed + echo $oldgov > $1/scaling_governor + + return 0 +} + +check_root() { + + printf "%-70s" "checking if we are root ... " + + if [ "$(id -u)" != "0" ]; then + printf "FAIL\n" + return 1 + fi + + printf "PASS\n" + + return 0 +} + +check_root || exit 1 + +for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "for '$i':\n" + check_frequency_change $CPU_PATH/$i/cpufreq || exit 1; +done diff --git a/cpufreq/test_05.sh b/cpufreq/test_05.sh new file mode 100644 index 0000000..4cc48f6 --- /dev/null +++ b/cpufreq/test_05.sh @@ -0,0 +1,145 @@ +#!/bin/bash +#/******************************************************************************* +# Copyright (C) 2011, Linaro Limited. +# +# This file is part of PM QA. +# +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the Eclipse Public License v1.0 +# which accompanies this distribution, and is available at +# http://www.eclipse.org/legal/epl-v10.html +# +# Contributors: +# Daniel Lezcano daniel.lezcano@linaro.org (IBM Corporation) +# - initial API and implementation +#******************************************************************************/ + +# URL : https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_05 +CPU_PATH="/sys/devices/system/cpu" + +oldgovs= + +save_governors() { + index=0 + for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "%-70s" "saving governor for $i ... " + oldgovs[$index]=$(cat $CPU_PATH/$i/cpufreq/scaling_governor) + printf "DONE\n" + index=$((index + 1)) + done +} + +restore_governors() { + index=0 + for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + oldgov=${oldgovs[$index]} + printf "%-70s" "restoring governor '$oldgov' for $i ... " + echo $oldgov > $CPU_PATH/$i/cpufreq/scaling_governor + printf "DONE\n" + index=$((index + 1)) + done +} + +exit_fail() { + restore_governors + exit 1 +} + +switch_governor() { + + newgov=$2 + oldgov=$(cat $1/scaling_governor) + + printf "%-70s" "setting governor to '$newgov' ... " + echo $newgov > $1/scaling_governor + curgov=$(cat $1/scaling_governor) + if [ "$curgov" != "$newgov" ]; then + printf "FAIL\n" + echo $oldgov > $1/scaling_governor + return 1 + fi + + printf "PASS\n" + + return 0 +} + +check_cpufreq_governor() { + + path=$CPU_PATH/cpufreq/$1 + printf "%-70s" "checking '$1' configuration directory ... " + if [ ! -d $CPU_PATH/cpufreq/$1 ]; then + printf "FAIL\n" + return 1 + fi + + printf "PASS\n" + + return 0 +} + +check_root() { + + printf "%-70s" "checking if we are root ... 
" + + if [ "$(id -u)" != "0" ]; then + printf "FAIL\n" + return 1 + fi + + printf "PASS\n" + + return 0 +} + +check_root || exit 1 + +save_governors || exit 1 + +trap exit_fail SIGHUP SIGINT SIGTERM + +for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "for '$i':\n" + switch_governor $CPU_PATH/$i/cpufreq ondemand || exit_fail; +done + +check_cpufreq_governor ondemand || exit_fail + +for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "for '$i':\n" + switch_governor $CPU_PATH/$i/cpufreq conservative || exit_fail; +done + +check_cpufreq_governor conservative || exit_fail + +for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "for '$i':\n" + switch_governor $CPU_PATH/$i/cpufreq userspace || exit_fail; +done + +# the following functions should fail as all cpu are in 'userspace' +# governor mode. The specified directories should not be present +printf "%-70s" "checking 'ondemand' configuration directory is not there ... " +if [ -d $CPU_PATH/cpufreq/ondemand ]; then + printf "FAIL\n" && exit_fail +fi +printf "PASS\n" + +printf "%-70s" "checking 'conservative' configuration directory is not there ... 
" +if [ -d $CPU_PATH/cpufreq/conservative ]; then + printf "FAIL\n" && exit_fail +fi +printf "PASS\n" + +# if more than one cpu, combine governors +nrcpus=$(ls $CPU_PATH | grep "cpu[0-9].*" | wc -l) +if [ $nrcpus > 0 ]; then + printf "for 'cpu0':\n" + switch_governor $CPU_PATH/cpu0/cpufreq ondemand || exit_fail; + printf "for 'cpu1':\n" + switch_governor $CPU_PATH/cpu1/cpufreq conservative || exit_fail; + check_cpufreq_governor ondemand || exit_fail + check_cpufreq_governor conservative || exit_fail +fi + +restore_governors diff --git a/utils/Makefile b/utils/Makefile new file mode 100644 index 0000000..ebe9856 --- /dev/null +++ b/utils/Makefile @@ -0,0 +1,19 @@ +CFLAGS?=-g -Wall +CC?=gcc +EXEC=nanosleep +SRC=$(wildcard *.c) +OBJ= $(SRC:.c=.o) + +check: $(EXEC) + +nanosleep: $(OBJ) + @$(CC) -o $@ $^ $(LDFLAGS) + +%.o: %.c + $(CC) -o $@ -c $< $(CFLAGS) + +clean: + rm *.o + +mrproper: clean + rm $(EXEC) diff --git a/utils/nanosleep.c b/utils/nanosleep.c new file mode 100644 index 0000000..2ca17d5 --- /dev/null +++ b/utils/nanosleep.c @@ -0,0 +1,45 @@ +/******************************************************************************* + * Copyright (C) 2011, Linaro Limited. + * + * This file is part of PM QA + * + * All rights reserved. 
This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Daniel Lezcano daniel.lezcano@linaro.org (IBM Corporation) + * - initial API and implementation + *******************************************************************************/ +#include <stdio.h> +#include <stdlib.h> +#include <errno.h> +#include <time.h> + +int main(int argc, char *argv[]) +{ + struct timespec req = { 0 }, rem = { 0 }; + + if (argc != 2) { + fprintf(stderr, "%s <nanoseconds>\n", argv[0]); + return 1; + } + + req.tv_nsec = atoi(argv[1]); + + for (;;) { + if (!nanosleep(&req, &rem)) + break; + + if (errno == EINTR) { + req = rem; + continue; + } + + perror("failed to nanosleep"); + return 1; + } + + return 0; +}
This test program increases the frequency for each cpu and launches the cpucycle program.
A computation is made to check that the deviation of the ratio (counter / frequency) is consistent across the frequencies.
More information can be found at:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_06
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
---
 cpufreq/test_06.sh | 236 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 utils/Makefile     |  14 +---
 utils/cpucycle.c   | 102 ++++++++++++++++++++++
 3 files changed, 341 insertions(+), 11 deletions(-)
 create mode 100644 cpufreq/test_06.sh
 create mode 100644 utils/cpucycle.c
diff --git a/cpufreq/test_06.sh b/cpufreq/test_06.sh new file mode 100644 index 0000000..e12487f --- /dev/null +++ b/cpufreq/test_06.sh @@ -0,0 +1,236 @@ +#!/bin/bash +#/******************************************************************************* +# Copyright (C) 2011, Linaro Limited. +# +# This file is part of PM QA. +# +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the Eclipse Public License v1.0 +# which accompanies this distribution, and is available at +# http://www.eclipse.org/legal/epl-v10.html +# +# Contributors: +# Daniel Lezcano daniel.lezcano@linaro.org (IBM Corporation) +# - initial API and implementation +#******************************************************************************/ + +# URL : https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_06 +CPU_PATH="/sys/devices/system/cpu" +CPUCYCLE=../utils/cpucycle +oldfreqs= +oldgovs= + +save_frequencies() { + index=0 + for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "%-70s" "saving frequency for $i ... " + oldfreqs[$index]=$(cat $CPU_PATH/$i/cpufreq/scaling_cur_speed) + printf "DONE\n" + index=$((index + 1)) + done +} + +restore_frequencies() { + index=0 + for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + oldfreq=${oldfreqs[$index]} + printf "%-70s" "restoring frequency '$oldfreq' for $i ... " + echo $oldfreq > $CPU_PATH/$i/cpufreq/scaling_setspeed + printf "DONE\n" + index=$((index + 1)) + done +} + +save_governors() { + index=0 + for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + printf "%-70s" "saving governor for $i ... " + oldgovs[$index]=$(cat $CPU_PATH/$i/cpufreq/scaling_governor) + printf "DONE\n" + index=$((index + 1)) + done +} + +restore_governors() { + index=0 + for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + oldgov=${oldgovs[$index]} + printf "%-70s" "restoring governor '$oldgov' for $i ... 
" + echo $oldgov > $CPU_PATH/$i/cpufreq/scaling_governor + printf "DONE\n" + index=$((index + 1)) + done +} + +switch_governor() { + + newgov=$2 + oldgov=$(cat $1/scaling_governor) + + printf "%-70s" "setting governor to '$newgov' ... " + echo $newgov > $1/scaling_governor + curgov=$(cat $1/scaling_governor) + if [ "$curgov" != "$newgov" ]; then + printf "FAIL\n" + echo $oldgov > $1/scaling_governor + return 1 + fi + + printf "PASS\n" + + return 0 +} + +check_frequency_perf() { + + frequencies=$(cat $1/scaling_available_frequencies) + index=0 + cpu=$2 + + # for each frequency check a change of performances + for i in $frequencies; do + + switch_frequency $1 $i || return 1 + + printf "%-70s" "running 'cpucycle' on $cpu at '$i' KHz ... " + + result=$($CPUCYCLE $cpu) + if [ $? != 0 ]; then + printf "FAIL\n" + return 1 + fi + + printf "PASS\n" + + results[$index]=$(echo "scale=3;($result / $i)" | bc -l) + index=$((index + 1)) + + done + + index=0 + sum=0 + + for i in $frequencies; do + res=${results[$index]} + sum=$(echo "($sum + $res)" | bc -l) + index=$((index + 1)) + done + + avg=$(echo "scale=3;($sum / $index)" | bc -l) + + index=0 + + for i in $frequencies; do + + res=${results[$index]} + + # compute deviation + dev=$(echo "scale=3;((( $res - $avg ) / $avg) * 100 )" | bc -l) + + # change to absolute + dev=$(echo $dev | awk '{ print ($1 >= 0) ? $1 : 0 - $1}') + + index=$((index + 1)) + + printf "%-70s" "deviation $dev % for $i is ... 
" + + res=$(echo "($dev <= 0.5)" | bc -l) + if [ "$res" = "1" ]; then + printf "VERY GOOD\n" + continue + fi + + res=$(echo "($dev <= 1.0)" | bc -l) + if [ "$res" = "1" ]; then + printf "GOOD\n" + continue + fi + + res=$(echo "($dev <= 2.0)" | bc -l) + if [ "$res" = "1" ]; then + printf "BAD\n" + continue + fi + + res=$(echo "($dev <= 5.0)" | bc -l) + if [ "$res" = "1" ]; then + printf "VERY BAD\n" + continue + fi + + res=$(echo "($dev <= 6.0)" | bc -l) + if [ "$res" = "1" ]; then + printf "SUSPECT\n" + continue + fi + + res=$(echo "($dev > 6.0)" | bc -l) + if [ "$res" = "1" ]; then + printf "BOGUS\n" + return 1 + fi + done + + return 0 +} + +switch_frequency() { + + oldfreq=$(cat $1/scaling_cur_freq) + newfreq=$2 + + nrfreqs=$(cat $1/scaling_available_frequencies | wc -w) + latency=$(cat $1/cpuinfo_transition_latency) + + printf "%-70s" "setting frequency to $2 KHz ... " + echo $2 > $1/scaling_setspeed + + # wait the latency period + ../utils/nanosleep $((nrfreqs * latency)) + + curfreq=$(cat $1/scaling_cur_freq) + if [ "$curfreq" != "$2" ]; then + printf "FAIL\n" + echo $oldfreq > $1/scaling_setspeed + return 1 + fi + + printf "PASS\n" + + return 0 +} + +exit_fail() { + restore_governors + exit 1 +} + +check_root() { + + printf "%-70s" "checking if we are root ... " + + if [ "$(id -u)" != "0" ]; then + printf "FAIL\n" + return 1 + fi + + printf "PASS\n" + + return 0 +} + +check_root || exit 1 + +save_governors || exit 1 + +trap exit_fail SIGHUP SIGINT SIGTERM + +for i in $(ls $CPU_PATH | grep "cpu[0-9].*"); do + + printf "for '$i':\n" + switch_governor $CPU_PATH/$i/cpufreq userspace || exit_fail + check_frequency_perf $CPU_PATH/$i/cpufreq $i || exit_fail + +done + +restore_governors || exit 1 diff --git a/utils/Makefile b/utils/Makefile index ebe9856..b3e206c 100644 --- a/utils/Makefile +++ b/utils/Makefile @@ -1,19 +1,11 @@ CFLAGS?=-g -Wall CC?=gcc -EXEC=nanosleep SRC=$(wildcard *.c) -OBJ= $(SRC:.c=.o) +EXEC=$(SRC:%.c=%)
check: $(EXEC)
-nanosleep: $(OBJ) - @$(CC) -o $@ $^ $(LDFLAGS) - -%.o: %.c - $(CC) -o $@ -c $< $(CFLAGS) +all: $(EXEC)
clean: - rm *.o - -mrproper: clean - rm $(EXEC) + rm -f *.o $(EXEC) diff --git a/utils/cpucycle.c b/utils/cpucycle.c new file mode 100644 index 0000000..e092c0a --- /dev/null +++ b/utils/cpucycle.c @@ -0,0 +1,102 @@ +/******************************************************************************* + * Copyright (C) 2011, Linaro Limited. + * + * This file is part of PM QA + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Daniel Lezcano daniel.lezcano@linaro.org (IBM Corporation) + * - initial API and implementation + *******************************************************************************/ +#define _GNU_SOURCE +#include <sched.h> +#include <stdio.h> +#include <stdlib.h> +#include <stdbool.h> +#include <signal.h> +#include <unistd.h> +#include <regex.h> +#include <sys/types.h> +#include <sys/time.h> +#include <sys/resource.h> + +static bool intr; + +void sigalarm(int sig) +{ + intr = true; +} + +int main(int argc, char *argv[]) +{ + regex_t reg; + const char *regex = "cpu[0-9].*"; + char **aargv = NULL; + regmatch_t m[1]; + cpu_set_t cpuset; + long counter = 0; + int i; + + if (argc == 1) { + fprintf(stderr, "%s <cpuN> [cpuM] ... 
\n", argv[0]); + return 1; + } + + aargv = &argv[1]; + + if (regcomp(®, regex, 0)) { + fprintf(stderr, "failed to compile the regex\n"); + return 1; + } + + CPU_ZERO(&cpuset); + + for (i = 0; i < (argc - 1); i++) { + + char *aux; + int cpu; + + if (regexec(®, aargv[i], 1, m, 0)) { + fprintf(stderr, "'%s' parameter not recognized, " \ + "should be cpu[0-9]\n", aargv[i]); + return 1; + } + + aux = aargv[i] + 3; + cpu = atoi(aux); + + CPU_SET(cpu, &cpuset); + } + + if (sched_setaffinity(0, sizeof(cpuset), &cpuset)) { + perror("sched_setaffinity"); + return 1; + } + + if (setpriority(PRIO_PROCESS, 0, -20) < 0) { + perror("setpriority"); + return 1; + } + + signal(SIGALRM, sigalarm); + + alarm(1); + /* warmup */ + for (; !intr ;) + counter++; + + counter = 0; + intr = false; + + alarm(1); + for (; !intr ;) + counter++; + + printf("%ld\n", counter); + + return 0; +}
On 11 Jun 30, Daniel Lezcano wrote:
Daniel,
The scripts themselves look ok and the documentation is excellent. But I'm in two minds regarding the location of the documentation.
While having it on a wiki in the current form allows people w/o access to the code to browse through the docs, it means that the 'comments' are at a different place from the code.
But as long as you're confident this works for you, I'm ok with it.
Once you've published the tree, please get in touch with Paul Larson to start pulling these scripts into LAVA. I'd like to see what the reports would look like.
Cheers, Amit
On 06/30/2011 11:09 AM, Amit Kucheria wrote:
Daniel,
The scripts themselves look ok and the documentation is excellent.
Thanks
But I'm in two minds regarding the location of the documentation.
While having it on a wiki in the current form allows people w/o access to the code to browse through the docs, it means that the 'comments' are at a different place from the code.
I thought of sending, once these are committed, patches to add a link for each test description on the wiki page to the corresponding file in the pm-qa git repo.
But as long as you're confident this works for you, I'm ok with it.
Once you've published the tree, please get in touch with Paul Larson to start pulling these scripts into LAVA.
Sure.
I'd like to see what the reports would look like.
I added as an attachment the result of these scripts on a fully working cpufreq framework on my Intel box. That will show the output of the tests.
But cpufreq support is not complete on a Pandaboard, so the results won't be really nice.
On Thu, Jun 30, 2011 at 10:44 AM, Daniel Lezcano <daniel.lezcano@linaro.org> wrote:
I added as an attachment the result of these scripts on a fully working cpufreq framework on my Intel box. That will show the output of the tests.
But cpufreq support is not complete on a Pandaboard, so the results won't be really nice.
Looking at your results answered some of my questions at least. It seems to have a very different format for the output than the previous tests. Are the older ones replaced by this, extended by it, or is this a completely separate test suite? If it's to be considered a different test suite, that makes some sense, as the type of tests here seems to be consistent pass/fail tests. However, I have a few concerns about having it easily parsed into results that can be stored automatically:
for 'cpu0':
checking scaling_available_governors file presence ... PASS
checking scaling_governor file presence ... PASS
for 'cpu1':
checking scaling_available_governors file presence ... PASS
checking scaling_governor file presence ... PASS
...

Heading1
    test_id1
    test_id2
Heading2
    test_id1
    test_id2

This is notoriously a bit tricky to deal with. It can be done, but the parsing has to track which heading it's under, and modify the test_id (or some attribute of it) to designate how it differs from other test cases with the exact same name. Since you have complete control over how you output results, though, it can easily be changed in such a way that is easy to parse, and easy for a human to look at. What might be easier is:

cpu0_scaling_available_governors_file_exists: PASS
cpu0_scaling_governor_file_exists: PASS
cpu1_scaling_available_governors_file_exists: PASS
cpu1_scaling_governor_file_exists: PASS
...
Another thing that I'm curious about here is...
saving governor for cpu0 ... DONE
Is that a result? Or just an informational message? That's not clear, even as a human reader.
deviation 0 % for 2333000 is ... VERY GOOD
Same comments as above about having an easier to interpret format, but the result here: "VERY GOOD" - what does that mean? What are the other possible values? Is this simply another way of saying "PASS"? Or should it actually be a measurement reported here?
Thanks, Paul Larson
On 06/30/2011 01:04 PM, Paul Larson wrote:
On Thu, Jun 30, 2011 at 10:44 AM, Daniel Lezcano <daniel.lezcano@linaro.org> wrote:
I added as an attachment the result of these scripts on a fully working cpufreq framework on my Intel box. That will show the output of the tests.
But cpufreq support is not complete on a Pandaboard, so the results won't be really nice.
Looking at your results answered some of my questions at least. It seems to have a very different format for the output than the previous tests. Are the older ones replaced by this, extended by it, or is this a completely separate test suite? If it's to be considered a different test suite, that makes some sense, as the type of tests here seems to be consistent pass/fail tests. However, I have a few concerns about having it easily parsed into results that can be stored automatically:
It is the same test suite, but I want the new tests to replace the old ones in the near future.
At present, new and old tests are co-existing.
The test suite is launched in two different ways:

 * running the old tests -> no modification
 * the new way, invoked by 'make check'
Today, you should not have to modify anything, as LAVA should invoke the pm-qa tests the old way.
When all the tests are finished, I wish to switch LAVA to the new way of running the test suite, if it is possible.
So in the meantime, LAVA can continue to run the old tests, while developers can easily invoke the new tests with 'make check' and check their kernel code each time new tests are committed.
Does it make sense?
for 'cpu0':
checking scaling_available_governors file presence ... PASS
checking scaling_governor file presence ... PASS
for 'cpu1':
checking scaling_available_governors file presence ... PASS
checking scaling_governor file presence ... PASS
...

Heading1
    test_id1
    test_id2
Heading2
    test_id1
    test_id2

This is notoriously a bit tricky to deal with. It can be done, but the parsing has to track which heading it's under, and modify the test_id (or some attribute of it) to designate how it differs from other test cases with the exact same name. Since you have complete control over how you output results, though, it can easily be changed in such a way that is easy to parse, and easy for a human to look at. What might be easier is:

cpu0_scaling_available_governors_file_exists: PASS
cpu0_scaling_governor_file_exists: PASS
cpu1_scaling_available_governors_file_exists: PASS
cpu1_scaling_governor_file_exists: PASS
Ok I can get rid of the nested format. No problem.
Each script runs several tests; IMO it would be better to show a description of what the script is doing and finish each with PASS or FAIL.
Will the following format be OK?
test_01/cpu0 : checking scaling_available_frequencies file ... PASS
test_01/cpu0 : checking scaling_cur_freq file ... PASS
test_01/cpu0 : checking scaling_setspeed file ... PASS
test_01/cpu1 : checking scaling_available_frequencies file ... PASS
test_01/cpu1 : checking scaling_cur_freq file ... PASS
test_01/cpu1 : checking scaling_setspeed file ... PASS
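As a rough sketch of what would emit that format (not actual pm-qa code; check_file and SYSFS_BASE are invented names, though the default path is the usual cpufreq sysfs location):

```shell
# Illustrative helper for emitting the "test/cpu : message ... RESULT"
# format above. SYSFS_BASE and check_file are invented names.
SYSFS_BASE=${SYSFS_BASE:-/sys/devices/system/cpu}

check_file() {
    test_name=$1    # e.g. test_01
    cpu=$2          # e.g. cpu0
    file=$3         # e.g. scaling_cur_freq
    printf '%s/%s : checking %s file ... ' "$test_name" "$cpu" "$file"
    if [ -f "$SYSFS_BASE/$cpu/cpufreq/$file" ]; then
        echo PASS
    else
        echo FAIL
    fi
}
```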
All the tests are described at:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts
Another thing that I'm curious about here is...
saving governor for cpu0 ... DONE
Is that a result? Or just an informational message? That's not clear, even as a human reader.
The result for a test case is PASS or FAIL.
But under some circumstances we need to do some extra work, where a failure does not mean the test case failed but that a prerequisite for the test case is not met.
For example, say the test case is to change the governor to 'userspace'. We have to be root to do such an operation. If the test script is run without root privileges, then the prerequisite is not met and the test script fails, not the test case.
But anyway, I can log the operations not related to the test case to a file and just display PASS or FAIL as the result. It will be up to the user to look at the log file to understand the problem.
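A minimal sketch of that split between results and informational output, assuming an invented LOGFILE location and helper names (not actual pm-qa code):

```shell
# Illustration: informational/setup messages go to a log file, so only
# PASS/FAIL results reach the output stream. LOGFILE and the function
# names are invented for this sketch.
LOGFILE=${LOGFILE:-./pm-qa.log}

log() {
    # e.g. log "saving governor for cpu0 ... DONE"
    echo "$*" >> "$LOGFILE"
}

check_root_prerequisite() {
    # prerequisite, not a test case: record the reason in the log and
    # let the caller bail out without printing a FAIL
    if [ "$(id -u)" != "0" ]; then
        log "prerequisite not met: must be root to change the governor"
        return 1
    fi
    return 0
}
```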
deviation 0 % for 2333000 is ... VERY GOOD
Same comments as above about having an easier to interpret format, but the result here: "VERY GOOD" - what does that mean? What are the other possible values? Is this simply another way of saying "PASS"? Or should it actually be a measurement reported here?
Yep, I agree it is an informational message and should go to a log file. I will stick to a simple 'PASS' or 'FAIL' result and let the user read the documentation of the test on the wiki page to understand the meaning of these messages (GOOD, VERY GOOD...).
eg. for this one:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_06
Thanks -- Daniel
- -- http://www.linaro.org/ Linaro.org │ Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro Facebook | http://twitter.com/#!/linaroorg Twitter | http://www.linaro.org/linaro-blog/ Blog
On Thu, Jun 30, 2011 at 11:24 PM, Daniel Lezcano daniel.lezcano@linaro.orgwrote:
When all the tests are finished, I would like to switch lava to the new test suite execution, if possible.
Given that the old tests are broken at the moment and disabled, any reason we shouldn't switch over now?
Will the following format be OK?
test_01/cpu0 : checking scaling_available_frequencies file ... PASS ...
That would probably translate internally to something like:
{test_id="test_01_cpu0", message="checking scaling_available_frequencies file", result="PASS"}
Is that ok? Seems like something we should have no trouble making sense out of later, I think. Also, the exact output is saved as an attachment as well.
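A sketch of how such a line could be split into those three fields (the sed expressions here are illustrative only; the real lava-test parser is a python regex in the test definition):

```shell
# Illustration: split "test_01/cpu0 : message ... RESULT" into the three
# fields Paul mentions. Not the actual lava-test parser.
parse_line() {
    # "test_01/cpu0" becomes "test_01_cpu0"
    test_id=$(printf '%s\n' "$1" | sed 's|^\([^ ]*\) :.*|\1|; s|/|_|g')
    # everything between the colon and the trailing "..."
    message=$(printf '%s\n' "$1" | sed 's|^[^:]*: \(.*\) \.\.\..*|\1|')
    # the word after the final "... "
    result=${1##*... }
    printf 'test_id=%s message=%s result=%s\n' "$test_id" "$message" "$result"
}
```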
The result for a test case is PASS or FAIL.
We support "unknown" as a result as well, if that helps you at all.
Sometimes results can be indeterminate, or unimportant (odd as that may sound, sometimes the message is really what you're after, as illustrated by some of the previous pm qa tests)
But under some circumstances, we need to do some extra work where a failure does not mean the test case failed but that a prerequisite for the test case is not met.
...and we also support "skip" as a result. That seems like a correct use for it. You don't have to report it literally as "skip", you can call it "oink" for all we care, and just provide a translation table that converts whatever your result strings are to {pass, fail, skip, unknown}. For example, see the test definition for ltp.
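A sketch of the translation-table idea (the lava-test definition expresses this as a python dict; the shell case statement below is only an equivalent illustration, and the exact set of input strings is an assumption):

```shell
# Illustration: map whatever result strings the scripts emit onto the
# four canonical lava results {pass, fail, skip, unknown}.
translate_result() {
    case "$1" in
        PASS|GOOD|"VERY GOOD") echo pass ;;
        FAIL|BAD|"VERY BAD")   echo fail ;;
        SKIP|oink)             echo skip ;;
        *)                     echo unknown ;;
    esac
}
```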
deviation 0 % for 2333000 is ... VERY GOOD
Same comments as above about having an easier to interpret format, but the result here: "VERY GOOD" - what does that mean? What are the other possible values? Is this simply another way of saying "PASS"? Or should it actually be a measurement reported here?
Yep, I agree it is an informational message and should go to a log file. I will stick to a simple 'PASS' or 'FAIL' result and let the user read the documentation of the test on the wiki page to understand the meaning of these messages (GOOD, VERY GOOD...).
Keeping to the template you mentioned earlier, I wonder if we could do something like this:
deviation_0_for_2333000: VERY GOOD ... PASS
(are the numbers there consistent and useful as a test case name? I'm assuming so here)
That would allow you to capture "VERY GOOD" as details in the message (one of the fields we keep). Also, your test could be smart enough to know that good or verygood = pass, while bad or verybad = fail. Possibly I'm making a lot of assumptions here, but I think you see what I mean.
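One possible shape for that suggestion, as a sketch; the 5%/15% thresholds below are invented for illustration and are not taken from the real test_06:

```shell
# Hypothetical sketch: fold the measured deviation into the test id,
# keep the qualitative message, and derive PASS/FAIL from it.
# The thresholds are invented, not from test_06.
report_deviation() {
    dev=$1 freq=$2
    if [ "$dev" -le 5 ]; then
        msg="VERY GOOD" res=PASS
    elif [ "$dev" -le 15 ]; then
        msg="GOOD" res=PASS
    else
        msg="BAD" res=FAIL
    fi
    printf 'deviation_%s_for_%s: %s ... %s\n' "$dev" "$freq" "$msg" "$res"
}
```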
Keeping to a consistent results format in your output is a good practice, and makes this *much* easier for capturing the data important to you. Of course, anything is possible. We have some tests with rather elaborate results and for those, the test definition just inherits from a base class and defines its own parser. If you're not feeling very pythonic, you could provide your own parser as part of the test download written in shell, c, ruby, go, whatever, that just acts as a filter, then have it read it all in directly. There are lots of options, but I'm of the opinion that a consistent format makes it easier on humans looking at it as much as on machine parsers. And since you have control over that, it's easiest to do it now. :)
Thanks, Paul Larson
On 11 Jul 01, Paul Larson wrote:
On Thu, Jun 30, 2011 at 11:24 PM, Daniel Lezcano daniel.lezcano@linaro.orgwrote:
When all the tests are finished, I would like to switch lava to the new test suite execution, if possible.
Given that the old tests are broken at the moment and disabled, any reason we shouldn't switch over now?
I think we should switch over now. There is no real dependency on the old tests.
Will the following format be OK?
test_01/cpu0 : checking scaling_available_frequencies file ... PASS ...
That would probably translate internally to something like:
{test_id="test_01_cpu0", message="checking scaling_available_frequencies file", result="PASS"}
Is that ok? Seems like something we should have no trouble making sense out of later, I think. Also, the exact output is saved as an attachment as well.
Does LAVA send email in case of a FAIL result?
The result for a test case is PASS or FAIL.
We support "unknown" as a result as well, if that helps you at all.
Sometimes results can be indeterminate, or unimportant (odd as that may sound, sometimes the message is really what you're after, as illustrated by some of the previous pm qa tests)
But under some circumstances, we need to do some extra work where a failure does not mean the test case failed but that a prerequisite for the test case is not met.
...and we also support "skip" as a result. That seems like a correct use for it. You don't have to report it literally as "skip", you can call it "oink" for all we care, and just provide a translation table that converts whatever your result strings are to {pass, fail, skip, unknown}. For example, see the test definition for ltp.
deviation 0 % for 2333000 is ... VERY GOOD
Same comments as above about having an easier to interpret format, but the result here: "VERY GOOD" - what does that mean? What are the other possible values? Is this simply another way of saying "PASS"? Or should it actually be a measurement reported here?
Yep, I agree it is an informational message and should go to a log file. I will stick to a simple 'PASS' or 'FAIL' result and let the user read the documentation of the test on the wiki page to understand the meaning of these messages (GOOD, VERY GOOD...).
Keeping to the template you mentioned earlier, I wonder if we could do something like this:
deviation_0_for_2333000: VERY GOOD ... PASS
(are the numbers there consistent and useful as a test case name? I'm assuming so here)
That would allow you to capture "VERY GOOD" as details in the message (one of the fields we keep). Also, your test could be smart enough to know that good or verygood = pass, while bad or verybad = fail. Possibly I'm making a lot of assumptions here, but I think you see what I mean.
Yes, it makes sense to make GOOD, VERY GOOD, etc. informational (as part of the messages) and map them to PASS.
Keeping to a consistent results format in your output is a good practice, and makes this *much* easier for capturing the data important to you. Of course, anything is possible. We have some tests with rather elaborate results and for those, the test definition just inherits from a base class and defines its own parser. If you're not feeling very pythonic, you could provide your own parser as part of the test download written in shell, c, ruby, go, whatever, that just acts as a filter, then have it read it all in directly. There are lots of options, but I'm of the opinion that a consistent format makes it easier on humans looking at it as much as on machine parsers. And since you have control over that, it's easiest to do it now. :)
Thanks, Paul Larson
On 07/01/2011 02:24 AM, Paul Larson wrote:
On Thu, Jun 30, 2011 at 11:24 PM, Daniel Lezcano daniel.lezcano@linaro.orgwrote:
When all the tests are finished, I would like to switch lava to the new test suite execution, if possible.
Given that the old tests are broken at the moment and disabled, any reason we shouldn't switch over now?
Ok, let's switch. Apart from the message output, which I have to change, is there something I should change to integrate with lava (e.g. for the 'make check' invocation)?
Will the following format be OK?
test_01/cpu0 : checking scaling_available_frequencies file ... PASS ...
That would probably translate internally to something like:
{test_id="test_01_cpu0", message="checking scaling_available_frequencies file", result="PASS"}
Is that ok? Seems like something we should have no trouble making sense out of later, I think. Also, the exact output is saved as an attachment as well.
That sounds good.
The result for a test case is PASS or FAIL.
We support "unknown" as a result as well, if that helps you at all.
Ok.
Sometimes results can be indeterminate, or unimportant (odd as that may sound, sometimes the message is really what you're after, as illustrated by some of the previous pm qa tests)
But under some circumstances, we need to do some extra work where a failure does not mean the test case failed but that a prerequisite for the test case is not met.
...and we also support "skip" as a result. That seems like a correct use for it.
Ok.
You don't have to report it literally as "skip", you can call it "oink" for all we care, and just provide a translation table that converts whatever your result strings are to {pass, fail, skip, unknown}. For example, see the test definition for ltp.
deviation 0 % for 2333000 is ... VERY GOOD
Same comments as above about having an easier to interpret format, but the result here: "VERY GOOD" - what does that mean? What are the other possible values? Is this simply another way of saying "PASS"? Or should it actually be a measurement reported here?
Yep, I agree it is an informational message and should go to a log file. I will stick to a simple 'PASS' or 'FAIL' result and let the user read the documentation of the test on the wiki page to understand the meaning of these messages (GOOD, VERY GOOD...).
Keeping to the template you mentioned earlier, I wonder if we could do something like this:
deviation_0_for_2333000: VERY GOOD ... PASS
(are the numbers there consistent and useful as a test case name? I'm assuming so here)
That would allow you to capture "VERY GOOD" as details in the message (one of the fields we keep). Also, your test could be smart enough to know that good or verygood = pass, while bad or verybad = fail. Possibly I'm making a lot of assumptions here, but I think you see what I mean.
Yes, I think you are right. It should be better presented like that.
Keeping to a consistent results format in your output is a good practice, and makes this *much* easier for capturing the data important to you. Of course, anything is possible. We have some tests with rather elaborate results and for those, the test definition just inherits from a base class and defines its own parser. If you're not feeling very pythonic, you could provide your own parser as part of the test download written in shell, c, ruby, go, whatever, that just acts as a filter, then have it read it all in directly. There are lots of options, but I'm of the opinion that a consistent format makes it easier on humans looking at it as much as on machine parsers. And since you have control over that, it's easiest to do it now. :)
Agree :)
I will rework the tests and resend.
Thanks Paul
-- Daniel
On Fri, Jul 1, 2011 at 9:08 AM, Daniel Lezcano daniel.lezcano@linaro.orgwrote:
On 07/01/2011 02:24 AM, Paul Larson wrote:
On Thu, Jun 30, 2011 at 11:24 PM, Daniel Lezcano daniel.lezcano@linaro.orgwrote:
When all the tests are finished, I would like to switch lava to the new test suite execution, if possible.
Given that the old tests are broken at the moment and disabled, any reason we shouldn't switch over now?
Ok, let's switch. Apart from the message output, which I have to change, is there something I should change to integrate with lava (e.g. for the 'make check' invocation)?
Nope, that should be just fine. Point me at a version when you make those changes, and I'll make a test definition for it in lava-test and we can try it out. Thanks, Paul Larson
On 07/01/2011 04:28 PM, Paul Larson wrote:
On Fri, Jul 1, 2011 at 9:08 AM, Daniel Lezcano daniel.lezcano@linaro.orgwrote:
On 07/01/2011 02:24 AM, Paul Larson wrote:
On Thu, Jun 30, 2011 at 11:24 PM, Daniel Lezcano daniel.lezcano@linaro.orgwrote:
When all the tests are finished, I would like to switch lava to the new test suite execution, if possible.
Given that the old tests are broken at the moment and disabled, any reason we shouldn't switch over now?
Ok, let's switch. Apart from the message output, which I have to change, is there something I should change to integrate with lava (e.g. for the 'make check' invocation)?
Nope, that should be just fine. Point me at a version when you make those changes, and I'll make a test definition for it in lava-test and we can try it out.
Ok, great.
Thanks
-- Daniel
On Wed, Jun 29, 2011 at 11:35 PM, Daniel Lezcano daniel.lezcano@linaro.orgwrote:
These tests are used to test the cpufreq driver on ARM architecture. As the cpufreq is not yet complete, the test suite is based on the cpufreq sysfs API exported on intel architecture, assuming it is consistent across architecture.
Hi Daniel, are these built on top of the previous pmqa testsuite so that they will work automatically? Or do we need to make updates to the test definition in lava-test? As long as they are going into the same git tree, and don't do anything that changes the way the results were parsed, they should be fine, but I wanted to make sure.
Thanks, Paul Larson