
Author Topic: Speeding scripts up  (Read 9578 times)


Offline THobson

  • OFPEC Patron
  • Former Staff
  • ****
Re:Speeding scripts up
« Reply #45 on: 14 Oct 2005, 20:24:21 »
It really depends on the complexity of the condition

@ simpleCondition

will perform much better than

@ complexCondition

You really need to test it.  I found that

@ (getDammage unit > xx)

was much better than a loop with a ~0.5 delay, but if it was:

@ ((getDammage unit > xx) and (some expression) or (someother expression))

I would expect the @ to take a greater hit than a loop.
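For reference, a minimal sketch of the two patterns being compared here (xx and the unit name are placeholders):
Code: [Select]
; checked every frame until true
@ (getDammage unit > xx)

; roughly equivalent loop, checked only every ~0.5 seconds
#waitLoop
~0.5
? !(getDammage unit > xx) : goto "waitLoop"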


Offline Dinger

  • Contributing Member
  • **
  • where's the ultra-theoretical mega-scripting forum
Re:Speeding scripts up
« Reply #46 on: 15 Oct 2005, 22:03:46 »
Mikero -- find me cases of "Lag" being used to refer to FPS chunders before Multiplayer Internet "Lag" showed up. I'm sorry, but logically an FPS chunder has nothing to do with Lag. You get controller Lag when a system is swamped so that response times slow down, but that's different from using "Lag" to mean FPS problems. Precise terminology makes for clear discussions, where the problems and the underlying causes are clear. Otherwise we chase each other in circles.
Dinger/Cfit

Offline Nemesis6

  • Members
  • *
Re:Speeding scripts up
« Reply #47 on: 18 Oct 2005, 01:51:33 »
What about scripts that don't have any loops or #s? Same deal with them, or is it unnecessary at that point?
« Last Edit: 18 Oct 2005, 01:51:49 by Nemesis6 »
I am actually flying into a star... this is incredible!

Offline Mikero

  • Former Staff
  • ****
  • ook?
    • Linux Step by Step
Re:Speeding scripts up
« Reply #48 on: 18 Oct 2005, 03:33:49 »
Unnecessary. The principle is: anything that causes the engine to scan the script again from the beginning promotes lag. #labels are prime candidates, and in your case
<start of file>
goto "init"

actually causes the engine to re-read that very line while searching for the label, only to find it isn't one!
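To illustrate the point, a minimal sketch of that pattern next to the simpler alternative (the sideChat lines are just stand-ins for the real work):
Code: [Select]
; jumping to a label near the top forces the engine to rescan from line one
goto "init"
#init
player sideChat "doing the work"

; simpler: if the label is the first thing the script does, drop both lines
player sideChat "doing the work"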
Just say no to bugz

Offline benreeper

  • Members
  • *
  • I'm a llama!
Re:Speeding scripts up
« Reply #49 on: 03 Dec 2005, 17:41:37 »
In a huge "if-then" switch block, is it quicker not to use a "goto" as a break?

E.G.

this:
if (boola) then {do this; goto "end"}
if (boolb) then {do this; goto "end"}
if (boolc) then {do this; goto "end"}
#end

as opposed
to this:
if (boola) then {do this}
if (boolb) then {do this}
if (boolc) then {do this}

--Ben

Offline Mikero

  • Former Staff
  • ****
  • ook?
    • Linux Step by Step
Re:Speeding scripts up
« Reply #50 on: 04 Dec 2005, 06:53:28 »
>in a huge if then

Would you believe, that's perverse! The SMALLER the if-switch block, the better it would be NOT to use a goto.

The larger the switch, the more time is spent decoding the if (bool) tests. The reason is the # moniker on the label: the engine iterates the string table quickly, looking for that unique identifier in the first character position of any 'line'. But as a principle you're right, because this switch block could be buried very deep down a 1000-line script; for the 10-odd ifs (a very large switch), I reckon testing the bools and failing would be faster than scanning from the top.


« Last Edit: 04 Dec 2005, 06:56:06 by Mikero »
Just say no to bugz

Offline benreeper

  • Members
  • *
  • I'm a llama!
Re:Speeding scripts up
« Reply #51 on: 04 Dec 2005, 17:25:11 »
Gotcha.
--Ben

Offline hardrock

  • Members
  • *
  • Title? What's that?
    • Operation FlightSim
Re:Speeding scripts up
« Reply #52 on: 09 Dec 2005, 12:27:51 »
I just investigated the question "@ vs. loop" a bit further.

For that I used a simple boolean as the condition, and two different scripts to check it. One checked the condition using @COND, the other with a loop with a 0.5 second delay and an if statement (?!COND : goto ...).

Each test run first started the tested script 5000 times in a while loop (to have them all start at once), waited a random amount of time and then changed the condition. I measured the time it took each single script to recognise the changed condition and took the average. To make even more sure, I did this 3 times and took the average of the 3 average values.
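For reference, a rough sketch of the kind of harness this describes, with hypothetical file and variable names (watcher.sqs stands for the script under test):
Code: [Select]
; tester.sqs - hypothetical reconstruction of the test setup
COND = false
_i = 0
while "_i < 5000" do {[] exec "watcher.sqs"; _i = _i + 1}
~(2 + random 5)
startTime = time
COND = true

; watcher.sqs - the @ variant being measured
@COND
hint format ["reacted after %1 s", time - startTime]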

Here are my test results:


using loops with a delay of 0.5 seconds

• fps
Normal (Desert Island): 54
Script startup: > 16
Scripts running: ~42

• activation time
The average time for the 0.5 sec. loop to recognise the changed condition was 0.212 seconds, i.e. approximately half the delay.


using loops with a delay of 0.25 seconds

• fps
Normal (Desert Island): 54
Script startup: > 16
Scripts running: ~27

• activation time
The average time for the 0.25 sec. loop to recognise the changed condition was 0.091


using loops with a delay of 0.1 seconds

• fps
Normal (Desert Island): 54
Script startup: > 16
Scripts running: ~6

• activation time
The average time for the 0.1 sec. loop to recognise the changed condition was 0


using @

• fps
Normal (Desert Island): 54
Script startup: > 16
Scripts running: ~7

• activation time
The average time for the @ command to recognise the changed condition was exactly 0.


Conclusion:

'@' is equivalent to a loop with a delay of 0.1 seconds. They both eat a lot of performance, but they are by far the fastest to get activated. So if something needs to be very precise, the best way is to use @. Avoid it in scripts running in parallel, though.

Everything else is better put into loops with delays of 0.25 seconds or bigger (every additional tenth of a second improves performance), even for the simplest condition, if you run the script over a long time or run multiple instances of the same script in parallel. In the latter case it would even be good to randomise the delay first, keeping a certain minimum.

e.g.:
Code: [Select]
; random delay between 0.5 and 0.7 seconds, averaging ~0.6
_r = 0.5 + (random 0.2)
~_r

Avoid writing ~(random 0.6), as this may have the effect that random 0.6 is recalculated every frame. I am not sure about this, though.
« Last Edit: 09 Dec 2005, 12:32:56 by hardrock »

Offline Flauta

  • Members
  • *
  • There is no knownledge, that is no power
    • Chek mi FLog
Re:Speeding scripts up
« Reply #53 on: 16 Dec 2005, 08:24:07 »
Hm... sorry, I haven't finished reading the thread yet, but what happens if we have, for example, 3 scripts:

Code: [Select]
; conditional.sqs
[] exec "condA.sqs"
[] exec "condB.sqs"
? condA : ActivationA = true
? condB : ActivationB = true

exit
Code: [Select]
; condA.sqs
@ ActivationA
xxxxx
xxxxx
xxxxx

exit
Code: [Select]
; condB.sqs
@ ActivationB
xxxxx
xxxxx
xxxxx

exit
Obviously this would have to be more complex; if it were as simple as that, there would be better ways to script it...

but isn't this a "fast" alternative way?!


EDIT: (some "to English" translation issues fixed..)
« Last Edit: 16 Dec 2005, 08:26:14 by Flauta »

Offline hardrock

  • Members
  • *
  • Title? What's that?
    • Operation FlightSim
Re:Speeding scripts up
« Reply #54 on: 16 Dec 2005, 12:28:21 »
Well, you'd be better off just writing

Quote
; conditional.sqs
? condA : [] exec "condA.sqs"
? condB : [] exec "condB.sqs"

exit

in your example, as it has the same effect without using global variables.

Offline Flauta

  • Members
  • *
  • There is no knownledge, that is no power
    • Chek mi FLog
Re:Speeding scripts up
« Reply #55 on: 16 Dec 2005, 18:41:11 »
But with my way of making the script you are loading whole scripts from the beginning, so the CPU "reads" the scripts once, not every time it needs to execute one... and note that I said it would be more complex; I thought of that too!! ;D

Maybe it's faster, maybe it's slower because of the two @s... that is what I'm asking: how much faster is it than hardrock's way?

Offline hardrock

  • Members
  • *
  • Title? What's that?
    • Operation FlightSim
Re:Speeding scripts up
« Reply #56 on: 16 Dec 2005, 22:36:57 »
Quote
Maybe it's faster, maybe it's slower because of the two @s... that is what I'm asking: how much faster is it than hardrock's way?
I think it's better to load the scripts when needed. Looking at my test results above, you can see that the @ command indeed costs a lot of performance, and I think loading one script is less work for the CPU than checking ten conditions in ten waiting scripts every frame.

But you're mentioning an interesting point. Loading a script is indeed a lot of work too, and above it was stated that a script isn't deleted from memory unless you use the exit command in it.

So, at least for non-looping scripts which have to be called several times during the mission, one option would be to preload them via init.sqs.

You'd need the line
Code: [Select]
? (time < 1) : goto "end"
and the label #end at the end of every script. I know this isn't that good for the CPU, as the engine has to search for the label "end" from the beginning of the script, but let's just assume that doesn't matter in the first second of the mission. After that label there should be the end of the script, plain, without an exit command.
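A minimal sketch of what such a preloadable script might look like (myScript.sqs and the sideChat line are hypothetical stand-ins):
Code: [Select]
; myScript.sqs - non-looping script that can be preloaded via init.sqs
? (time < 1) : goto "end"

; the actual work of the script goes here
player sideChat "doing the real work"

#end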

Then you'd write in your init.sqs
Code: [Select]
;; load scripts
[] exec ...
[] exec ...

@(time>1)

;; do the rest

This way every non-looping script would be preloaded in the first second of the mission and would be available for the rest of the mission without having to be loaded again.

That's just a hypothesis and I never tested it, but it would be interesting to hear what you think about it.
« Last Edit: 16 Dec 2005, 22:38:07 by hardrock »

Offline Mikero

  • Former Staff
  • ****
  • ook?
    • Linux Step by Step
Re:Speeding scripts up
« Reply #57 on: 17 Dec 2005, 00:58:59 »
>and above it was stated that a script isn't deleted from memory unless you use the exit command in it.

Or by default: an exit command is implied at the end of any script.

A script only stays resident in memory if there's a permanent loop in it.

Thus

[] exec "anything.sqs"

achieves nothing speed-wise in terms of holding that script in memory if anything.sqs doesn't loop permanently.

You can very easily validate this assertion:

create a call to a non-looping anything.sqs in an init.sqs (e.g.)

have the same animal called by (say) a radio trigger

alter the text in anything.sqs while the game is running

The text will change in the non-looping script, but not in a 'resident' one.
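A quick sketch of that test setup, assuming a script file called anything.sqs (all names are placeholders):
Code: [Select]
; init.sqs - starts the script once at mission start
[] exec "anything.sqs"

; anything.sqs - non-looping, so it is read fresh each time it is exec'ed
player sideChat "version 1 of the text"

; radio trigger, On Activation field:
; [] exec "anything.sqs"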
Just say no to bugz

PheliCan

  • Guest
Re:Speeding scripts up
« Reply #58 on: 18 Dec 2005, 12:40:42 »
Very interesting discussion! Just one question to make things clear...
Which one is better:

Code: [Select]
_i = 0
#arrayLoop
_current = _array select _i
do some stuff...
_i = _i + 1
~0.01
?(_i < count _array) : goto "arrayLoop"

or this one

Code: [Select]
{ [_x] call do_some_stuff_function, ~0.01 } forEach _array

This variant would also be interesting:

Code: [Select]
_i = 0
#whileLoop
do some stuff...
_i = _i + 1
~0.01
?(_i < 20) : goto "whileLoop"

or this one

Code: [Select]
_i = 0
while "_i < 20" do { [_i] call do_some_stuff_function, _i = _i + 1, ~0.01 }

The reason for using a function within the forEach and while commands is only to be able to use new lines (to better organize the scripts). Would the forEach and while commands solve the problem of re-reading the files (from memory)?

Offline THobson

  • OFPEC Patron
  • Former Staff
  • ****
Re:Speeding scripts up
« Reply #59 on: 18 Dec 2005, 13:07:33 »
It depends what you mean by better. The first is a loop involving several lines; the OFP engine can step through those lines, doing other things as well in between each line. The single-line forEach (or while-do) instruction will be executed faster, but will therefore prevent the engine from spending time doing other stuff. So it really depends on what you want: is it essential that all these things get done quickly, or that the engine is freed up to do other work while it gets through them?

In any event, _array needs to be pretty big, or the stuff you are doing to the elements of the array needs to be pretty significant, for you to notice much difference.

Other points:

- you don't need the 0.01 wait in the loop (see the sketch below)
- the 0.01 wait will not work within a block of code delineated by {}
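A minimal sketch of the array loop from the question with the 0.01 wait removed (the loop body stays as a placeholder):
Code: [Select]
_i = 0
#arrayLoop
_current = _array select _i
; do some stuff with _current here
_i = _i + 1
? (_i < count _array) : goto "arrayLoop"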
« Last Edit: 18 Dec 2005, 13:09:41 by THobson »