
Author Topic: Benchmarking functions  (Read 2149 times)


Offline Spooner

Benchmarking functions
« on: 24 Mar 2009, 23:03:31 »
Having discussed what particular algorithms are better or worse than others, I made a benchmark function to, I hope, measure run-times. I'm not at all sure about the accuracy of this function, but I'd be interested in anyone else's input:
Code: (benchmark.sqf) [Select]
private ["_function", "_params", "_iterations", "_repeats", "_start"];

_function = _this select 0; // Function to test.
_params = _this select 1; // Parameters to pass to the function on each call.
_iterations = _this select 2; // Number of iterations to run without pausing.
_repeats = _this select 3; // Number of times to repeat the test.

_start = time;

for "_i" from 1 to _repeats do
{
    for "_j" from 1 to _iterations do
    {
        _params call _function;
    };

    sleep 0.001;
};

// Return the average time per call.
(time - _start) / (_iterations * _repeats);
The measurement should become more accurate as the number of _iterations increases and the higher your regular frame-rate is (so run it on Rahmadi with low settings). If your game doesn't freeze completely for at least a couple of seconds, increase the number of iterations until it does; otherwise the graphics/game engine will contribute a significant share of the measured time. If the functions you want to test take a long time, reduce the number of iterations before you start, otherwise you could be there all day...
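For anyone wanting to sanity-check the approach outside the game, the same pattern (repeats of tight iteration loops, averaged per call) can be sketched in Python against a real high-resolution clock. This is only an illustrative analog, not part of the SQF script; the function and variable names here are my own:

```python
import time

def benchmark(function, params, iterations, repeats):
    # Same shape as benchmark.sqf: repeats * iterations calls, averaged.
    start = time.perf_counter()
    for _ in range(repeats):
        for _ in range(iterations):
            function(params)
        # benchmark.sqf sleeps 0.001 s at this point so the game scheduler
        # can breathe; a standalone Python process has no such need.
    return (time.perf_counter() - start) / (iterations * repeats)

# Usage: subtract a no-op baseline to isolate the cost of the operation.
nop = benchmark(lambda p: None, [], 5000, 3)
work = benchmark(lambda p: sum(p), [1, 2, 3], 5000, 3)
print("corrected cost: %.3f us per call" % ((work - nop) * 1e6))
```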
Code: (Example test: push vs concatenate) [Select]
#define ITERATIONS 5000
#define REPEATS 3

private ["_benchmark", "_array", "_timeNOP", "_timePush", "_timeCat"];

_benchmark = compile preprocessFile "benchmark.sqf"; // Load the benchmark function.

sleep 5; // Allow for system init.

// Baseline for doing nothing.
_array = [];
_timeNOP = [{ }, [], ITERATIONS, REPEATS] call _benchmark;

// Push one element onto array.
sleep 1;
_array = [];
_timePush = [{ _array set [count _array, 1] }, [], ITERATIONS, REPEATS] call _benchmark;

// Concatenate one element onto an array.
sleep 1;
_array = [];
_timeCat = [{ _array = _array + [1] }, [], ITERATIONS, REPEATS] call _benchmark;

hint format ["NOP: %1ms\nConcatenate: %2ms\nPush: %3ms",
_timeNOP * 1000,
(_timeCat - _timeNOP) * 1000,
(_timePush - _timeNOP) * 1000];
Incidentally, in my tests, pushing one element (_a set [count _a, _b]) takes a lot less time than concatenating one value (_a = _a + [_b]) (on a 2.5GHz quad core):
* 1000 iterations (x3) takes 1us and 6.7us per operation, respectively
* 5000 iterations (x3) takes 0.5us and 25.5us per operation, respectively
* 10000 iterations (x3) takes 0.4us and 17.6us per operation, respectively
Push is clearly much faster based on these tests, but I'm not sure if the ratio is at all accurate...

Anyone see problems with this benchmark system? I'm sure I've ignored some important factor...

EDIT: Please don't discuss whether push or concatenate is faster, or should be faster, here - I'm just giving that as an example to test benchmarking effectiveness.

EDIT2: Oops, marked timings as ns (nanoseconds) when I meant us (microseconds).
« Last Edit: 26 Mar 2009, 15:32:53 by Spooner »
[Arma 2] CBA: Community Base Addons
[Arma 1] SPON Core (including links to my other scripts)

Offline Rommel92

Re: Benchmarking functions
« Reply #1 on: 28 Mar 2009, 04:58:19 »
Regarding the discussion of how to improve the script: I've tried many variations, to no avail, unless we have access to system time. Read below for more information.

Quote from: Report
My results were neither consistent nor meaningful. In-game time is too dependent on FPS, and hence only gives an average. If you get below 20 FPS, in-game time can effectively 'speed up' to unrealistic values.

Since low FPS distorts the passage of in-game time (and hence the measured time delta), no reliable measurements can be made unless you can access the system clock.

This script would still be useful for determining constants if you set a maximum system 'effect' score, and then ensured script packs or missions don't exhaust the system by exceeding that constant.  ;)

Quote from: 2nd. Report
Code: (benchmark) [Select]
private ["_function", "_params", "_iterations", "_repeats", "_pause", "_start"];

_function = {};
_params = [];
_iterations = 5000;
_repeats = 3;
_pause = <Variable>; // Varied per test; see the results below.

_start = time;

for "_i" from 1 to _repeats do
{
    for "_j" from 1 to _iterations do
    {
        _params call _function;
    };

    sleep _pause;
};

timeDta = time - _start;

Frames, Time (ms), Min, Max, Avg
    62,      3456,   0,  38, 17.940

_pause = 0.000001

Frames, Time (ms), Min, Max, Avg
    61,      3422,   0,  40, 17.826

_pause = 0.0001

Frames, Time (ms), Min, Max, Avg
    71,      3572,   0,  45, 19.877

_pause = 0.01

Frames, Time (ms), Min, Max, Avg
   126,      4491,   0,  44, 28.056

_pause = 1.0
« Last Edit: 28 Mar 2009, 09:13:02 by Rommel92 »

Offline i0n0s

Re: Benchmarking functions
« Reply #2 on: 28 Mar 2009, 09:45:40 »
Wouldn't it be useful at that time to use getTicks from ArmALib?

Offline Spooner

Re: Benchmarking functions
« Reply #3 on: 29 Mar 2009, 03:51:22 »
Well, I realised there were inaccuracies in the measurement, which is why I was always comparing one function against another, and both against the value taken for a null function. Certainly, the results are completely meaningless unless compared with another result.

The intention of the test is to actually reduce the machine to 0FPS, at least temporarily, to reduce the effect of graphics rendering.

Still, ArmAlib getTicks does seem to be the way to move forward (I'd never thought of that, but most people don't use ArmAlib anyway).

Offline Rommel92

Re: Benchmarking functions
« Reply #4 on: 12 Apr 2009, 04:35:05 »
Out of curiosity (a few of my scripts use them heavily), I benchmarked the differences between the old and new commands/methods in ArmA/OFP etc.

note: I used setViewDistance 1 to keep graphics rendering to a minimum (plus all settings on low)

Here are the results for a few of them.

Code: [Select]
All the operators came out between 0.055 and 0.059, except the fastest two, <= and >=:
0.053 : 0.053
This was done multiple times, varying the order of execution to eliminate random error; consistent results were obtained.

commented code : uncommented code
0.054 : 0.036

isServer : Local Server
0.0543 : 0.0701

isnull player : Local player
0.0709 : 0.0709

position : getpos
0.0871 : 0.0859

damage : getdammage
0.0709 : 0.0709

waituntil loops : while loops : foreach loops : for loops
0.0704 : 8.14 (Complete freeze) : 0.0804 : 0.087 

Code: (loops) [Select]
Last set:

[{waituntil{time>0}}, [], 15000, 3, 0.01] call _benchmark;
[{while{time=0}do{}}, [], 15000, 3, 0.01] call _benchmark;
[{{}foreach[0]}, [], 15000, 3, 0.01] call _benchmark;
[{for "_z" from 1 to 2 do{}}, [], 15000, 3, 0.01] call _benchmark;

Code: (benchmark.sqf) [Select]
private ["_function", "_params", "_iterations", "_repeats", "_pause", "_start"];

_function = _this select 0; // Function to test.
_params = _this select 1; // Parameters to pass to the function on each call.
_iterations = _this select 2; // Number of iterations to run without pausing.
_repeats = _this select 3; // Number of times to repeat the test.
_pause = _this select 4; // Time delay between repeats.

_start = time;

for "_i" from 1 to _repeats do
{
    for "_j" from 1 to _iterations do
    {
        _params call _function;
    };

    sleep _pause;
};

timeDta = timeDta + [(time - _start) - (_pause * _repeats)]; // Append this run's time, minus the pauses, to the global results array.
« Last Edit: 12 Apr 2009, 04:56:15 by Rommel92 »

Offline Spooner

Re: Benchmarking functions
« Reply #5 on: 12 Apr 2009, 16:19:17 »
Interesting results there, but remember that absolute timings are mostly irrelevant (except when those commands are run a lot, such as in high-repetition loops).

Your loop tests are not really comparable, since you are comparing apples with oranges. You also use time=0 as a condition, which just makes the loop not run at all: time=0 sets the value of time and returns nil, rather than comparing time to zero (that would be time == 0). In either case, though, such a loop runs continuously and thus isn't benchmarkable. You also can't really benchmark anything with a delay in it (waitUntil, or while with a sleep inside), since most of the time taken is then the pausing itself, which is totally dependent on the frame rate.
Code: (equivalent loops) [Select]
{ for "_i" from 0 to (1000 - 1) do { } }
{ private "_i"; for [ { _i = 0 }, { _i < 1000 }, { _i = _i + 1 }] do {} }
{ private "_i"; _i = 0; while { _i < 1000 } do { _i = _i + 1 } }
In case you weren't aware, the first loop form includes a "free" private statement for the iterator.

Also, I'm not convinced that your changes to the benchmark are valid. There is no reason to increase or decrease the delay value from 0.001 (that value was picked for a very specific reason), and increasing it just makes the test less accurate. I also deliberately don't remove the time paused from the total; it is left in for a good reason, although the larger you make the pause value, the more odd the results will look when it isn't removed.
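To see why the size of the pause matters when it is left in the total, a little arithmetic helps: the measured per-call average is inflated by pause/_iterations, so the skew grows linearly with the pause value. A minimal sketch of that model (plain Python, illustrative numbers only, not SQF):

```python
def measured_average(per_call, iterations, repeats, pause):
    # Model of the benchmark: total elapsed = work + one pause per repeat,
    # averaged over all iterations * repeats calls.
    total = per_call * iterations * repeats + pause * repeats
    return total / (iterations * repeats)

true_cost = 1e-6  # assume a 1 us operation
for pause in (0.001, 0.01, 1.0):
    avg = measured_average(true_cost, 5000, 3, pause)
    skew = avg - true_cost  # works out to pause / iterations
    print("pause=%g: skew = %.3f us per call" % (pause, skew * 1e6))
```

With 5000 iterations, a 0.001 s pause adds a negligible 0.0002 us per call, while a 1 s pause adds 200 us per call and swamps the thing being measured.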

Offline Rommel92

Re: Benchmarking functions
« Reply #6 on: 13 Apr 2009, 04:35:09 »
I knew they were completely irrelevant as comparisons; thanks for spotting the mistake with the while statement.

I am aware that there was a private statement in the loop; I simply did not change it.
I forgot to mention that these results need heavy criticism (as is being given :P) and that I was very skeptical about the loops comparison. I had also been messing about with the code for a while, so I'll try the original code in a moment and run the same tests.

Using the original post's benchmarking script, and the following code:
Code: [Select]
[{waitUntil {cond = true; cond}}, [], 15000, 3] call _benchmark; sleep 1;
[{cond = true; while {cond} do {cond = false}}, [], 15000, 3] call _benchmark; sleep 1;
[{{} forEach [0]}, [], 15000, 3] call _benchmark; sleep 1;
[{for "_z" from 0 to 1 do {}}, [], 15000, 3] call _benchmark; sleep 1;
[{getPos player}, [], 15000, 3] call _benchmark; sleep 1;
[{position player}, [], 15000, 3] call _benchmark; sleep 1;
[{getPosASL player}, [], 15000, 3] call _benchmark; sleep 1;
[{damage player}, [], 15000, 3] call _benchmark; sleep 1;
[{getDammage player}, [], 15000, 3] call _benchmark; sleep 1;
[{var = isServer}, [], 15000, 3] call _benchmark; sleep 1;
[{var = local server}, [], 15000, 3] call _benchmark;

note for discussion: I am very aware of the extra variable assignments in the while-statement function, which may be the reason for its slower results; but it still shows the same thing can be done with less code using waitUntil (about a fifth faster on average), and that the variable assignments must be taken into account for some of the other statements as well.

And the results were respectively:
Code: (Test 1) [Select]
0.002600
0.003355
0.002244
0.001866
0.002222
0.002244
0.002222
0.001866
0.001844
0.001866
0.002244
Code: (Test 2) [Select]
0.003955
0.004466
0.003733
0.001866
0.002222
0.002222
0.002222
0.001866
0.001844
0.001866
0.002244
Code: (Test 3) [Select]
0.002577
0.002955
0.002222
0.001866
0.002222
0.002223
0.002222
0.001866
0.001844
0.001844
0.002223
« Last Edit: 13 Apr 2009, 05:00:12 by Rommel92 »