OFPEC Forum

Editors Depot - Mission Editing and Scripting => OFP - Editing/Scripting General => Topic started by: Mikero on 20 Aug 2005, 05:43:08

Title: Speeding scripts up
Post by: Mikero on 20 Aug 2005, 05:43:08
Am posting this here in lieu of anywhere else. The following is a tiny information tute on scripts in general and how to speed them up. All comments welcome, and probably the most important comment would be whether this is useful enough to warrant an upload as a submission. I don't believe this knowledge is generally known.

Moderators, forgive me if I've got the wrong thread.
----

The OFP engine reads a script (.sqs) or function (.sqf) file once, for its lifetime. It retains a copy of that script 'in memory' until it encounters an exit statement or equivalent. Only after that point, if the script is required again, is it re-read from file. Exit statements therefore induce lag wherever they cause a file read.

At no time does the engine retain a compiled version of the script. Each command is individually processed as and when. This is termed interpreting in the trade (as opposed to compiling).

For the lifetime of a script, any 'goto' style command causes the script to be re-interpreted from its beginning to wherever the #target is. Read that statement carefully and dismiss any ideas that it sometimes 'jumps ahead' or 'skips' or does something special. Any goto causes an unconditional re-interpret, from the beginning. Dismiss your ideas of how this works, because they caused you lag.

For this reason, large initialise scripts can measurably cause lag. How much lag is a 'how long is a piece of string' question, but it can be perceptible. The reason, of course, is that useless text is re-'interpreted' and discarded.

The traditional approach in almost all scripts is as follows
//////////////////////
do lots of initialise
...........
and lots more

#loop
.....
goto loop
//////////////////////
To speed scripts up:
//////////////////////
goto initialise

#loop
.....
goto loop

#initialise
........
goto loop
//////////////////////
Put in any necessary exit commands as appropriate, of course, but the intent here is to get the #targets as close to the top as possible.
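As a concrete sketch of that layout (hypothetical file name, with a simple damage check standing in for the loop body; note that in real .sqs syntax the label name in a goto is a quoted string):
//////////////////////
; watchDamage.sqs -- hypothetical example of the layout above
goto "init"

#loop
~0.5
? (getDammage _unit > 0.9) : exit
goto "loop"

#init
_unit = _this select 0
goto "loop"
//////////////////////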
Title: Re:Speeding scripts up
Post by: UNN on 20 Aug 2005, 07:28:12
I understand your point, and your suggestion won't harm even if it's not the case, so I will probably follow that format. But how can you be so certain that it does re-interpret at every goto? Is it possible to test?

I would not have thought twice about it, until I started looking through the save game files using CoC's BinView. You can view each script, what point it was at when saved, and what values were in the variables.

If OFP organises the scripts in memory the same way it does in the saved game, then there is nothing to worry about?
Title: Re:Speeding scripts up
Post by: macguba on 20 Aug 2005, 09:33:28
I'll leave this thread open because I'm sure it will stimulate some intelligent debate.  

@mikero please write this up as a wee tutorial and submit it to the Ed Depot.    

Information at OFPEC is always stored in the Ed Depot, not the forums.
Title: Re:Speeding scripts up
Post by: Mikero on 20 Aug 2005, 14:04:50
@UNN

>won't harm even if it's not the case

wisest remark anyone can make.

>How certain are you?

100% cast-iron, gold-plated, set-in-concrete guarantee that a script file is re-interpreted from the 'top of file'. I am trying to use great caution in not implying the file is re-read (another common misconception).

One test, albeit not conclusive, is:

#loop
----
goto loop
----
#loop
----
goto loop

The very first #loop is the #loop that gets used. There are other tests, and it has been thrashed around over time in the forums. I am unable to find the relevant threads right now, and prefer not to as there is additional 'noise' in them which only serves to confuse the issue (or at least blur it).

I am not familiar with the CoC scripts and have never had success with that excellent tool, BinView, when reading fps files. What I can tell you is that the stored contents of these, and of so-called 'binary' mission.sqm's, config.cpp's and description.ext's, are in fact tokenised versions of the scripts where, simply stated, any repetitive text strings are only held once and tokens are used to access the same string. They are not binary or compiled in the normal sense of the word.

I have a little  blurb  (http://andrew.nf/OFP/BedtimeReading/bin_2_cpp_compression.htm) on that subject.

But, you definitely get the last word of wisdom here, it can't possibly be worse, and is likely to improve things.

Title: Re:Speeding scripts up
Post by: Terox on 20 Aug 2005, 18:22:53
I didn't know this, so as per the above post, I ran the following script

Quote
_count = 0
#LOOP
_count = _count + 1
~1
hint "first loop"
if(_count >1)then {goto "LOOP"}
#LOOP
~2
hint "second loop"
goto "LOOP"

and the first "Loop" label was used, not the second

Now my evaluation on this is either
a) The label is searched for from the start of the script (most likely)
b) The position of the label is kept in memory somehow, maybe via line number (doubtful)


Anyway, from now on all my scripts will be written with goto "INIT" as the first line, and the actual main loop will be at the top of the script


 :)
Thx for the info
Title: Re:Speeding scripts up
Post by: UNN on 21 Aug 2005, 03:42:16
From the point of view of best practice, pushing the init routines to the end is not up for debate. But I was curious about testing it, and just understanding more about how OFP handles scripts. So if this is off topic, my apologies.

In BinView, labels are stored as individual items, and appear to be given an index by OFP.

(http://homepages.gotadsl.co.uk/~gssoft/binview01.jpg)

Again in BinView, each line of code is stored just above the labels. In this case the repeat loop is right at the end:

(http://homepages.gotadsl.co.uk/~gssoft/binview02.jpg)

The line the Repeat loop points to (32_BIT_SIGNED 16) is here:

(http://homepages.gotadsl.co.uk/~gssoft/binview03.jpg)

If I can track down and change the label index I should be able to test it. In theory I should be able to save a script before it reaches the last label, then hex edit the saved game to point to another label. If the script executes as normal and ignores my changes, then that proves the saved game format has no relation to the script in memory. If it does respond to my changes then we know OFP does not re-read every line, though it still probably has to loop through the script to find index 16. So either way, you might be reducing potential lag by putting the init stuff at the end, and loops at the start.

Ok, so this is definitely off topic: WaitUntil is used by @ and suspend until is used by ~.

But quickly back on topic again :) To make life that bit easier, you can set up Chris' OFP Script Editor with a suitable template:

Code: [Select]
;Initialise the script
goto "Init"

;Start the main body
#start



;End the script here
Exit

;Initialise the script here
#Init

_unit = _This Select 0

;Return to start
goto "Start"
Title: Re:Speeding scripts up
Post by: Mikero on 21 Aug 2005, 07:56:37
wow UNN you made that easy to understand!

My best, almost certain, bet is that if you alter via a hexedit, the changes will occur in the 'game'.

This is because only non-exiting scripts, as in ones 'in use at the time of the save', are actually saved in the fps.

That might, incidentally, startle you; perhaps not. To give an example:

fred.sqs

_thing=50;
exit

fred.sqs will not be found in the fps file (unless by extraordinary coincidence it happened to be 'in use' at the time).

Editing script files 'on the fly' can only happen if they are exiting scripts. Ones that stay resident, e.g. anything on a permanent loop, cannot; the only way to change their behaviour is to hexedit. The fps file, for this purpose, is much the same as saying 'resident in memory'.

This is not off topic for me, because it leads back to the statement that exiting scripts, by their nature, are lag inducing (they cause a re-read, a re-binarise, of the 'file').

As for the very interesting decodes in your binview the 'strings' have associated with them an 'index number' a value you can clearly see in your example. It is these index numbers and these strings which make up the so-called 'binary' mission.sqms and are the object of affection in Amalfi's bin2cpp and cpp2bin utilities.

For me, the phrase re-interpret means re-interpreting the strings present in the above tables you illustrate. Admittedly, considerably less cpu crunch overhead in interpreting them as opposed to stripping out semicolons, tabs and newlines from a text file, but re-interpreted all the same. There is no hash index construct to go directly to a given line entry. A shame.

So, now I guess it is wandering off topic and I'll take Macguba's suggestion to upload it as a submission.

Thank you all, for your input. I can see it's already been beneficial for people.
Title: Re:Speeding scripts up
Post by: Terox on 21 Aug 2005, 10:02:46
Ref the "Exit"
could you explain this more

are you saying that

The following examples:
 A ( a looping script)

Quote
goto "INIT"

#Loop
~1
XXXXXX
XXXXXX
XXXXXX
XXXXXX
If(XXX)then{goto "LOOP"}else{exit}

#INIT
XXXX
XXXX
goto "LOOP"

 and
 B (a linear script)
Quote

XXXXXX
XXXXXX
XXXXXX
XXXXXX
exit

are better off being rewritten in the following way

Quote
goto "INIT"

#Loop
~1
XXXXXX
XXXXXX
XXXXXX
XXXXXX
If(XXX)then{goto "LOOP"}

#INIT
XXXX
XXXX
if(_time <1)then{goto "LOOP"}

 and
 B (a linear script)
Quote
XXXXXX
XXXXXX
XXXXXX
XXXXXX
;;;;; exit


Surely it must use some process to see that the EOF has been reached. I could see the advantage of not using an exit on a linear script, but in a script with a loop the only place you can really goto for an EOF is a label at the end of the script, which means the script must be re-read a line at a time to get to that label, and therefore the difference in cpu requirements must be minimal. Or, as in my example above, you allow the script to re-run the init section rather than loop, by using a _time query (which may be impractical in a large script with lots of loops). Which, by your above findings, then suggests that all larger scripts should be reduced into several small scripts, as many as possible run in a linear chain, each calling the next script as the previous one completes.


If you could expand on the non-use of the "exit" command, I would be most appreciative.

In fact, just an overall guide to good and bad practices, at an advanced level, would be very welcome.
Title: Re:Speeding scripts up
Post by: Mikero on 21 Aug 2005, 10:37:04
ooooooo sorry for that, just shows how careful I have to be with phrasing things.

the exit statement or equivalent is the definition of a non-resident script: one that will, by its nature, cause a file access to reoccur if it's called again, because it is dumped from memory as no longer required. Resident scripts, in contrast, do a single file read.

I did not mean to imply (as I did do) that the exit function itself was a criminal.

Nor do I have remedies to prevent file reads. The point here was to enable folks to understand that goto statements cause scripts to be reINTERPRETED, not re-read from file (which has often been the assumption).

Your script examples are 'correct'. The emphasis here is to always put #targets as close to top of file as possible.
Title: Re:Speeding scripts up
Post by: Terox on 31 Aug 2005, 18:38:27
So, if I've got this right:

"exit" has the same effect on a script

that
myvariable = nil
has

so as basic coding practice
use EXIT only for run-once type scripts (eg INIT.sqs or an INTRO)
and
Omit EXIT if the script will be called to run more than once (eg from an eventhandler)

which probably means that multiple calls on a looping script would be better served by having a goto "End", with that label at the very bottom of the script, rather than an exit somewhere

One thing I am curious about though:

if a script has no more lines of code to be read, how does the "read script" system know when to stop looking for lines of code?
Title: Re:Speeding scripts up
Post by: Baddo on 31 Aug 2005, 19:49:59
Quote
One thing I am curious about though:

if a script has no more lines of code to be read, how does the "read script" system know when to stop looking for lines of code?

As far as I know, OFP will do it just like any other program written in C++.

Something like:

Code: [Select]
...
string s;
while (getline(fin, s))
{
    cout << s << endl;
}
...

will see when the EOF arrives and stop there. Why? Because C++ input operations like getline set the eofbit (and in some cases, also the failbit) to 1 when they arrive at the end of the file. This is just an example; I am not saying getline is what's used for the job, but the same applies to other methods too.

From some website:
Quote
Operating systems need to keep track of where every file ends. There are two techniques for doing this: One is to put a special end-of-file mark at the end of each file. The other is to keep track of how many characters are in the file.

So, when OFP reads a file stream, it encounters a special mark in the file and knows to stop there.
Title: Re:Speeding scripts up
Post by: Mikero on 31 Aug 2005, 21:52:08
@Terox

sorry, you got it wrong. You're focussing on the wrong end of the barrel (exit)

Quote
which probably means that multiple calls on a looping script would be better having a goto "End" and have that label at the very bottom of the script, rather than an exit somewhere

'multiple calls' automatically implies that the script being called has, one way or the other, exited. Stopped, ceased to exist, no longer running, not resident in memory. You wouldn't be calling a still active looping script.

'goto end' would be the worst possible outcome, because it would cause the ENTIRE script to be re-interpreted to its END just to exit (gasp). In fact, this is a perfect example of when you *would* use an exit statement there and then, to cease and desist.

also, 'event handlers' calling looping scripts is a shocking crime punishable by measles, the pox and hopefully bouts of dysentery. It is the worst lag demon of them all (unless large ~waits are used). An 'event' is an interrupt: something you want to have handled quickly, and then get the hell out of there, quickly.

@baddo said it all, an end of file (EOF) is an implied exit.
Title: Re:Speeding scripts up
Post by: THobson on 31 Aug 2005, 22:36:47
I have never written a code interpreter, but if I were to do such a thing I think I would have code that:
1. stripped out all the leading blank characters from a line
2. read the next character
3. took any appropriate action

In OFP the first non-blank character of a line can have special meanings ( ; @ # ~ & ).  So to find a label it would be necessary initially to find a line whose first non-blank character == #, then check whether this is the target label.

If that is how it is done, then things that would affect the performance of a goto/label combination would be:

The number of lines above the target label
The number of blank characters at the start of all the lines above the target label
The number of labels above the target label.

Is this making sense?
Title: Re:Speeding scripts up
Post by: macguba on 31 Aug 2005, 23:37:43
Gentlemen,

When this discussion has run its course, it would be good if somebody could write up the conclusions into an "idiot's guide to making your scripts run faster" tutorial.     You don't need to understand the technicalities to benefit from simple advice like using "exit" directly rather than "goto #end".
Title: Re:Speeding scripts up
Post by: KTottE on 01 Sep 2005, 00:07:40
A lot of the scripts I see, and some of the ones I've written myself, use labels for flow control.

? (expression) : goto "label"
? (other_expression): goto "other_label"

How bad is this, assuming that the checks are done at the top of the file? Would performance be gained by using if {} then {}-statements instead?

What is the relative performance hit of instantiating new scripts as opposed to jmp:ing to labels? Would the first example be better than the second:
Code: [Select]
; first.sqs
? (expression): [] exec "second.sqs"

; second.sqs
; Do Magic.

Code: [Select]
? (expression): goto "second"

; Lots of stuff here

#second
; Do Magic.

Thanks for bringing this to our attention, Mikero, I know of a lot of scripts that need rewriting :)
Title: Re:Speeding scripts up
Post by: Mikero on 01 Sep 2005, 02:56:53
@Thob

much of what you mention above is done on the file read. These wrinkles are parsed out of the script there and then. It raises the issue of a performance hit accompanying any file read that also requires comments to be stripped out as well (sigh). But, in terms of gotos, there is NO distinction or performance problem with 1 squillion lines of comment and white space, because they are only read (parsed) ONCE (per file read). They do not affect loop time. Only the number of statements above the #label affects it.

What the engine does is parse all strings of text into a linked list on any file read. This is separate to interpreting, done later.

Each 'object' of the list is a string, containing some pertinent, humanly legible command(s) and a pointer to the next command string, which, conceptually, is the next line of the script. (In fact, the next line of script that isn't a comment, whitespace, a tab character, etc.)

You can see this 'parsing' in the example jpegs provided earlier

If you look reasonably carefully, you'd notice that none of the actual command syntax is touched in any way; it is simply parsed, by the engine, so that only the truly relevant text instruction is kept and all the noise, all the comments, are removed. The important bit is that none of this is 'compiled', because if it were, then whether gotos were 90 miles down the page would be irrelevant: they would have been compiled into binary jumps.

Because it is a sequentially ordered, linked list of statements (which is a very reasonable way to do things), in order to find anything in that list you have to start at the first one and search for it (DUH!). The engine can't magically 'skip over' 30 lines of text to the intended goto label, because that label might in fact be (and mostly is) behind it. It is the fact that most labels are behind, not ahead, of the goto that makes a 'forward search from current position' intensely wasteful. Most searches would fail, and the engine would have to go to the top and try again. The BIS engine simply goes to the top in an all-bets-are-off fashion.

>The number of lines above the target label

The number of command statements above the target label, given that there will be a performance hit, once, to get rid of any comments 'above the target label'. That is why

goto init
#loop
goto loop

is the fastest performance (search) you can do.

>The number of blank characters at the start of all the lines above the target label

nope. They don't affect the performance of the loop (as you imagined); they do affect the performance of the script of course.

>The number of labels above the target label.

nope. Each 'string' is inspected with equal vigour for a match against the label's name. The reason for using # (or any other unique symbol) is to mismatch the search strings asap: an instant first-character mismatch between what are labels and what are mostly not labels in the linked list. There would, of course, be some small crunch in examining progressive characters in each encountered #label, but that's trivial. To reinforce the point here, labels are not treated in any special way in the linked list; there is no 'tag' to say "I am a label", there is no hash index to say 'here are the labels'. Labels are termed 'unassigned strings', exactly the same way any command with no parameters is. 'Assigned strings' are items like elephants = 123.4

These are the only two (of four) possible tags used to describe what a 'string' is in the linked list (from the piccies above). The other two tags are classes and arrays. This parsed string and its linked list is the foundation of how the engine creates 'save' files.

Title: Re:Speeding scripts up
Post by: Mikero on 01 Sep 2005, 06:43:46
@KTottE

Quote
What is the relative performance hit of instantiating new scripts as opposed to jmp:ing to labels

Frightening. An exec requires a file to be read AND parsed (but not interpreted); an existing internal script requires only to be re-interpreted. The _exception_ to this _might_ be sqf (functions). I believe the real difference between them and sqs is that functions remain, as parsed, in memory. (I am aware of specific differences between sqf and sqs, but I suspect *this* is the real difference.)

In that instance a sqf call would not be slower, but would still need a string search thru a 'resident-in-memory' table, similar to a goto #label in most practical respects.

An easy test would be to edit a sqf while a mission is running, if the differences show, then it aint resident and the above is waffle.
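A related way to avoid repeated file reads altogether is to preprocess a function into a global variable once and then call that parsed copy; a minimal sketch with hypothetical file and variable names (preprocessFile, call, hint and format are standard commands, the rest is made up for illustration):

Code: [Select]
; init.sqs -- read and parse the function file once
fDouble = preprocessFile "fDouble.sqf"

; fDouble.sqf would contain a single expression whose value is returned:
;   (_this select 0) * 2

; anywhere later, no further file access should be needed:
_result = [21] call fDouble
hint format ["result: %1", _result]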

> Would performance be gained by using if {} then {}-statements instead?

your first example shows a which statement, the 2nd a classic if else. Anyone sane would assume the which is interpreted faster. The opposite is true. A which needs massaging back into an if else each time, every time, it is encountered. The if else construct, on the other hand, is already predisposed to the internal string tables mentioned earlier {if else is a pseudo class tag, it contains 'body'} my braces are intentional :)

But I think I misread your Q. A 'classic' if else will beat any form of goto each time, every time, because all statements that affect the if (or the else) are forward references.
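To make the contrast concrete, a small hypothetical sketch that rewrites a ?:goto branch as the forward-referencing if/else form:

Code: [Select]
; label-based branch: each goto makes the engine search for the label from the top of the script
? (alive _unit) : goto "stillAlive"
hint "unit is dead"
goto "done"
#stillAlive
hint "unit is alive"
#done

; if/else form: everything it needs is a forward reference on the same line
if (alive _unit) then {hint "unit is alive"} else {hint "unit is dead"}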
Title: Re:Speeding scripts up
Post by: THobson on 01 Sep 2005, 09:24:37
Mikero:

Thanks for that.  I actually understood it.

On the overall issue of performance and speed of scripts, I think it is worth considering how often and how fast the relevant loops are.  If I have a script that runs once every 5 minutes or so and has a loop through half a dozen units I am not going to be too worried about how fast it all runs.  Other scripts can obviously benefit from a lot of optimisation.

By the way, in my experience:

{set; of; instructions} forEach [set, of, elements]

completely blows away a goto/loop combination in terms of how quickly it executes.  But paradoxically it does seem to contribute to lag in the mission as the cpu at least appears to be more dedicated to the forEach instruction than to doing other stuff.  This makes sense as it is only a single instruction, but a lot of work can be packed into it.

Title: Re:Speeding scripts up
Post by: Fragorl on 01 Sep 2005, 09:49:57
With nothing much to add as yet, I thought I might just say: very interesting topic, and I'm following it closely
Title: Re:Speeding scripts up
Post by: Terox on 01 Sep 2005, 16:45:07
Ok, the reason I misinterpreted the info was because
I had already taken it that when a script runs its course, it terminates and removes itself from resident memory.

But something so basic was posted in this forum that it then threw me, and I assumed there was more to it.

It was like saying water is wet, but this particular drop is also wet, if you get my meaning.




So basically, to summarise this thread

1) Always start your looping scripts with goto "INIT"
2) Have your fastest or most called loop at the top of the script and any additional loops up at the top also

for example

SCRIPT
Quote
goto "INIT"

#LOOP
~1
XXXXX
XXXXX
XXXXX
XXXXX
if (condition == whatever) then {goto "LOOP"} else {exit}

#INIT
_a = _this select 0
_b = whatever
goto "LOOP"


and the reason for the above layout
When a goto command is issued, the scripting engine will start searching for that label at the top of the script, working its way downwards.
So the closer the label is to the top, the less searching it has to do per loop
Title: Re:Speeding scripts up
Post by: Bluelikeu on 01 Sep 2005, 17:02:00
Quote
Frightening. An exec requires a file to be read AND parsed (but not interpreted); an existing internal script requires only to be re-interpreted. [...]


How can we be sure of how the BIS team built OFP? Based on what I've seen, programmers nowadays don't care how inefficient their code may be (except for some older-generation people); rather, they rely on the idea that faster and better processors will come out to substitute for their lack of thinking things out before writing them. Also, why do we expect that BIS did a good job in programming? The behaviour of scripts is erratic and you can never count on them even 80% of the time to do what you really want them to. I'm not bashing the BIS team, but I think that they could have done a better job. There is absolutely no reason that the scripts should be unreliable.
Title: Re:Speeding scripts up
Post by: Mikero on 02 Sep 2005, 01:55:11
@Terox

thank you. That was the point behind the post, neatly summed up by you. We can keep extrapolating examples but that's all they'd be. The principle is to keep recurrent labels at the top and develop techniques to achieve that.

But the rest is not off topic it's just a fascinating (for me) discussion on some internal workings of the engine.

@Blue

>how can we be so sure

Scripts, but most especially classes, smack of C++, and (most of) what is discussed here has been about parsing the text, not the actual execution or interpretation of those commands. It is unlikely BIS would re-invent wheels from a 'standard' way of parsing these things. Looking at one of the dlls in the game (I forget the name), it appears to have a yacc parser embedded in it; perhaps not. Everything about the so-called 'binary' encrypted missions, and anything I've ever seen in a save file, has structures that would be produced if BIS followed classic parsing techniques.

It is only because BIS do not, then, go on and compile these 'statements' into binary code that parsing has achieved an importance it doesn't deserve.

@Thob
>{set; of; instructions} forEach [set, of, elements]

that would help to explain some wrinkles where video is clearly not lagging but the game goes temporarily awol - lots of 'where are you?'s, things like that. The 'engine' can't keep up with background tasks, but 'appearance' seems normal.
Title: Re:Speeding scripts up
Post by: Sui on 05 Sep 2005, 08:58:02
BIS have come a long way since OFP v1.0 in regards to scripting performance... that's for sure.

I remember the days where you could easily create infinite loops by forgetting to put pauses in them! ;D
Some of us had to learn the hard way over and over again not to do that... ;)

the foreach syntax (and I think also the count syntax) are by far the quickest in execution in my humble experience. I often use them for tasks that need 'instant' results, as it seems to take a whole chunk of CPU time and execute the whole operation at once.

eg.
"unit addmagazine {M16}" foreach [1,2,3,4]

Not a 'time critical' example that, but a good example to show that you don't need to include the use of an _x in a foreach loop.
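For contrast, a sketch of the same construct when you do want the element in the loop; the player's group here is just a hypothetical target:

Code: [Select]
"_x addMagazine {M16}; _x addWeapon {M16}" forEach units group player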

However I did once manage to pause flashpoint for about 10 minutes using a monster foreach call... it ran fine after it had finished, but it was literally 1 frame every 10 seconds while it was trying to execute it ;D
Title: Re:Speeding scripts up
Post by: THobson on 05 Sep 2005, 09:21:08
Quote
"unit addmagazine {M16}" foreach [1,2,3,4]

That is one of those - 'now why the hell didn't I think of that' bits of code.  Very neat.  
Title: Re:Speeding scripts up
Post by: h- on 05 Sep 2005, 17:00:41
This is actually a very nice find...

We tested this on some MCAR guidance codes and it seems to have even a visible impact on how the code guides an object... :P

Of course that may be caused by many other things, psychological 'wrinkles' being one of them and of course OFP itself, but somehow it seems to make things a bit smoother, since the initialising part of the guidance code is very long before any labels are reached...

So this gave one more optimizing method to use :)
Title: Re:Speeding scripts up
Post by: shinraiden on 25 Sep 2005, 07:35:40
Quote
I remember the days where you could easily create infinite loops by forgetting to put pauses in them! ;D [...]

You should still be able to make a runaway mem leaking hard lock by looping a drop[] with no ~_time in the loop. Even ~0.001 is more than enough to prevent the meltdown.

Further confirmation of some of this discussion I think can be found in the fact that if you exec an unpbo'd script, edit it while the engine is still running, then exec it again, the new script is processed. An example would be to make a 0,0,1 triggered script that displays a hint, exec it, alt-tab and change the string and save, then alt-tab back. Note the new results. Btw, that makes dev for missions and addons exponentially faster. Also, this does not work for files called with PreProcessFile for obvious reasons, as they are pre-loaded.
Title: Re:Speeding scripts up
Post by: Dinger on 26 Sep 2005, 19:03:14
ah yes...
Quote
Not a 'time critical' example that, but a good example to show that you don't need to include the use of an _x in a foreach loop.
[granpa voice] Back in my day, we didn't have Call, and ForEach on a null value was the only way to call a string.[/granpa voice]

Now, the discussion:
The goto method described is indeed how I learned that Commodore BASIC processes things.
Whether OFP does, is a different question.
Frankly, I've seen a lot of debate over scripting efficiency over the years, and few people actually basing their theory on anything other than conjecture and parallel reasoning. Folks, those aren't theories, those are hypotheses.

UNN's objection is a valid one, and I'll build on it:
You can save the game state at any time, and look at how the data is stored.
If you look at the scripts section, you'll find that scripts are stored, stripped of comments and blank spaces, in a line-by-line fashion, with each line assigned a number.
Labels are stored separately, as pointers to the line that follows them. This seems to be true even if you have a bunch of labels at the end that are never used.

Therefore, it seems safe to assume -- in the absence of other evidence -- that the method Mikero describes is not the case. Now it may very well be that it is faster, but not for the reason given -- any performance improvement will be a result of the label appearing towards the top of the stack of labels. In any case, I'd like to see some verifiable tests before taking this as dogma.


As for Shin's hypothesis, it's hard to say what exactly happens.
REmember the "preview" in the mission editor is not necessarily the same as running a mission from the mission pbo: the mission editor preview intentionally allows the editor to change scripts "on the fly" -- at least those not running in memory (n.b., if a script is running in memory in the mission editor, and you alt-tab out, change the .sqs, and try to start a new instance of that script, it'll run the old one).

So it has not been determined whether on mission start, the whole mission pbo is loaded into memory, or only the parts that are needed. However, if mission pbos work anything like addon ones (and there is little reason to doubt it), one would suspect they and their contents are mapped to memory at mission start.

So my counter-hypotheses, backed up by what little OFP experience I have:

1. Outside of Mission Editor missions, missions are mapped to (real or virtual) memory
2. When a script is called, it gets parsed into the "game-state":
2.1 Comments and white space are stripped.
2.2 Functions and Wait instructions (@ and ~) are assigned line numbers
2.3 Labels are mapped to line numbers, and put in a separate pile
2.4 Any improvement gained by rearranging file lists is negligible when compared to the performance costs of parsing instructions at runtime.

Now go show me I'm wrong.
Title: Re:Speeding scripts up
Post by: KTottE on 29 Sep 2005, 08:41:44
Heh, so we'll accept Dinger's statements as true for now then. Doesn't seem like anyone is up to the task of disproving his statement :)
Title: Re:Speeding scripts up
Post by: THobson on 29 Sep 2005, 08:58:54
If I recall correctly, the view that goto labels are better placed near the top of a script came about at least in part from the fact that

@ condition

is much less laggy than

#wait
~0.5
if inverse_condition then {goto"wait"}

There then followed some discussion as to why that might be.

Dinger is certainly correct.  It is easy to imagine what is going on in the engine, and even to check it by looking at some of the saved bin files.  We just need people with the energy to test it all in mission to see the impact.

Title: Re:Speeding scripts up
Post by: Dinger on 29 Sep 2005, 13:49:55
As we all know, @ statements will run at least as fast as any kind of goto loop with a delay.
@ tells the engine to check the condition every time slice.
~.0001 wait and a goto has the effect of telling the engine to process all the other scripts running before coming back.
Title: Re:Speeding scripts up
Post by: THobson on 29 Sep 2005, 16:51:23
Maybe I wasn't clear.  

The @ command generates much less video lag than a goto loop with a 0.5 second delay in it.
Title: Re:Speeding scripts up
Post by: h- on 30 Sep 2005, 12:40:50
@THobson
Doesn't that really depend on what you have inside the loop...

If you have nearestObject call(s), or for example camCreate/createVehicle or drop, those will kill your FPS in a matter of seconds when using @...
Title: Re:Speeding scripts up
Post by: Dinger on 30 Sep 2005, 12:49:39
How are you measuring "video lag"?  (I presume you mean an FPS hit)
Title: Re:Speeding scripts up
Post by: THobson on 30 Sep 2005, 14:20:39
Basically I wanted to test the view that an @ is always more damaging to lag than a loop.  I placed 5,000 empty BMPs on the map each with a script run from their Initialisation field.  In one run through the script had the following:

@ (getDammage thisunitname > 0.98)

and in another run through it had

#wait
~0.5
if (getDammage thisunitname <=0.98) then {goto"wait"}


Provided the player could not see the BMPs: with the first there was no detectable lag at all.  With the second the lag was ugly, really ugly.  I even changed the script so there was a random delay at the start, so that all the 5,000 scripts were not naturally synchronised.  Still it was very ugly.
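For reference, a sketch of what that randomised loop variant might have looked like (a hypothetical reconstruction, with the BMP passed in from its init field):

Code: [Select]
; damageWatch.sqs -- hypothetical reconstruction of the loop variant,
; with a random start delay so the 5,000 copies are not synchronised
_bmp = _this select 0
_r = random 0.5
~_r

#wait
~0.5
if (getDammage _bmp <= 0.98) then {goto "wait"}

hint "BMP badly damaged"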

I agree that with more complex condition statements the loop might be better than the @  Each case should be tested specifically.  The purpose of the exercise was for me to determine which of these constructs I should use in my own mission to detect when a vehicle was badly damaged.  The answer was the exact opposite of what I was being told would be the case.

It was in trying to understand why this was the case that, I believe, led to the thinking that the placement of the labels might be having an impact.

EDIT:

Video lag - yes I mean fps.  The reason I specified video lag is that there does seem to be another, more subtle, underlying lag issue that manifests itself in the form of dumb AI, even though the fps remain okay.

EDIT2:

I believe this might be the thread that started this:
http://www.ofpec.com/yabbse/index.php?board=6;action=display;threadid=24542

Title: Re:Speeding scripts up
Post by: Dinger on 01 Oct 2005, 16:46:22
Lag is the time delay experienced in network games as a result of network travel time, available bandwidth and server load.


The test in question is interesting, but "scanning for labels" is not the only or the most compelling explanation.

With 5000 scripts, it would not be unexpected to see each script get called at an interval greater than .5 seconds, so in this case ~.5 might be equivalent to an @ command.
Also, starting with a random increment isn't going to be enough for the same reason: it'll exceed each of those waits and end up bumper-to-bumper anyway.
Moreover, since each line of code gets allocated an approximate temporal value, running a goto loop has twice the lines of code (~, ?:goto) as an @ statement, so right there you could get twice the slowdown -- maybe three times.
And the conditions are not quite equal. > and < are, cpu-wise, much less intensive operations than >= and <=.


So this test would demonstrate that if you're going to run a whole bunch of scripts checking a condition, and those scripts have a ~ interval shorter than the time needed to run through all those scripts, then an @ statement may be more efficient.
Title: Re:Speeding scripts up
Post by: THobson on 01 Oct 2005, 18:31:54
I am not sure that is what people usually mean by lag on this forum.

I tried it with < and > as well as with <= and >=.  There was no detectable difference.

I am not with you on the bumper to bumper logic.  The number of lines looks like an interesting explanation.

In any event I chose to use the @ command in this particular case.
Title: Re:Speeding scripts up
Post by: Dinger on 01 Oct 2005, 19:57:16
the term "lag" gets abused. It aids greatly if the terms are used properly.
Lag refers to a delay. FPS chunders and FPS slowdowns are not delays, but decreases in performance. That's why they're not lag.
"Lag" got applied to FPS problems incorrectly as a result of poorly-written MP code (like Doom, HL1, and so on), which required all units to be synchronized to proceed. An out of synch unit would freeze or create stutters. Soon by synechdoche people were blaming all stutters on "Lag".
It's not a problem for general diagnostics, but when you're trying to sort out complex phenomena, such as what produces the best performance for the network, the individual CPUs and so on, you need to be precise.

"Bumper to bumper" logic isn't logic, it's a conjectural explanation of the phenomena, just like "searching through for the label" is a conjectural explanation of the same phenomena. The only difference is that I've offered evidence (analysis of savegame files) that falsifies the latter conjecture.
I'm glad that < and <= have no discernible difference: that indeed suggests that the actual time the CPU spends processing the instruction is insignificant compared to the time spent parsing it.
Anyway, good luck with your research.
Title: Re:Speeding scripts up
Post by: h- on 02 Oct 2005, 09:37:44
Quote
the term "lag" gets abused. It aids greatly if the terms are used properly.
Lag refers to a delay. FPS chunders and FPS slowdowns are not delays, but decreases in performance. That's why they're not lag.
"Lag" got applied to FPS problems incorrectly as a result of poorly-written MP code (like Doom, HL1, and so on), which required all units to be synchronized to proceed. An out of synch unit would freeze or create stutters. Soon by synechdoche people were blaming all stutters on "Lag".
It's not a problem for general diagnostics, but when you're trying to sort out complex phenomena, such as what produces the best performance for the network, the individual CPUs and so on, you need to be precise.
I have heard that to be called desync (as well)... :P

Everybody I know refers to the decreases in performance/FPS drops as lag though...
Maybe because the phenomenon is a bit similar to the desync happening in MP games, like you described..

I think the same goes with, for example, sound engineers/professionals: "you want to add echo into some sound? There is no echo, there's only delay or reverb"... :P

Btw, what would then be the correct term for the decrease in performance/FPS drops? Or is there a term for that?
Title: Re:Speeding scripts up
Post by: penguinman on 02 Oct 2005, 22:48:54
I know this thread is about speeding up scripts, so I was wondering:

if you were trying to catch a bullet at a certain time, would running two scripts at the same time increase the chances of catching it? Because I know scripts aren't fast enough to catch a bullet at a certain point.
Title: Re:Speeding scripts up
Post by: Chris Death on 03 Oct 2005, 03:53:56
@Thobson there's just one thing that makes your test not
an equal comparison:

@ (getDammage thisunitname > 0.98)

Here you check for damage being higher then 0.98



#wait
~0.5
if (getDammage thisunitname <=0.98) then {goto"loop"}

And here you check if damage is lower or equal than/to 0.98

Also, like Dinger already pointed out, with such a mass of
scripts running the @ command will lose its efficiency of cycling.
Suma once said IIRC that the @ command will check for every
frame or so, which again supports Dinger's theory in this case.

You should enhance your test by checking out what will happen
if one of those 5,000 tanks reaches the damage limit:

Which variant will then react faster?

Also you should maybe make equal checks for both variants:

#loop
?(getDammage _thisunit > 0.98): exit
~0.5
goto "loop"

You might now say that you were saving one line with your
version, but neither you nor I can say for sure that a complicated
one-liner is more efficient than 2 simple lines - only a new test
can prove right or wrong again.

And btw - yes - LAG is the result of desync and will show up
like: you shoot another guy but he doesn't die because on
his end of the internet he is already somewhere else, just
your pc didn't get the right info yet.

But ppl tend to use the word LAG for things like performance loss
or framerate dropdown as well - probably because LAG is such
a nice word, or simply 3 characters, or just accepted by
almost the entire community to be used in this case.
But that's not the point of the discussion, methinks.

Quote
Btw, what would then be the correct term for the decreasement in performance/FPS dropdowns? Or is there a term for that?

I would tend to say: slow-motion or dia-show (slide-show)  ;D  ;)

~S~ CD
Title: Re:Speeding scripts up
Post by: THobson on 03 Oct 2005, 09:02:55
You will see in a later post that I also used < > as well as <= and >= in the tests, with no obvious effects.

I don't understand how your four liner helps - I don't want it to exit.  There would need to be another goto and another lable, making it quite an ugly construct.  (Note there was a typo in my original that I have now corrected - I had goto"loop" when it should have been goto"wait")

Your comment that 5000 scripts will make the @ lose efficiency is interesting.  The @ performed very well in the test; it was the loop that died.

I am sure there are many ways of defining lag.  The OED has many, one of which is 'fall behind'.  When the video output falls behind the action, that is lag just as much as lag occurring over a network; there is also lag that occurs when the work the engine has to do causes some of the ai IQ-raising activities to fall behind the action, resulting in dumb ai.  The problem is not that the word lag is being used incorrectly, it is that on its own it is not sufficiently descriptive of what is happening, so we should be talking about network lag, video lag and possibly even ai IQ lag.
Title: Re:Speeding scripts up
Post by: Chris Death on 03 Oct 2005, 13:39:54
Quote
I don't understand how your four liner helps - I don't want it to exit.

OK, but the four-liner is doing exactly the same as your @ example.

Your 'if the opposite of the @ condition, then goto do-it-again' might be much more work for the processor to understand than the condition you were using in your @ statement.

Your if statement might look tricky, but it's a hell of a lot more complicated than the @ condition you were using - multiplied by 5000 this will make a big difference.

that's my point here

Quote
Your comment that 5000 scripts will make the @ lose efficiency is interesting.  The @ performed very well in the test it was the loop that died.

OK, that's now my bad English (not my native language), therefore I'll try to explain it again another way:

The @ command will check the condition to be met for every available frame (so SUMA from BIS said).

This means if you have 60 frames per second the @ command
will check 60 times for the condition in a second.

Now if you run 5000 scripts at the same time, the frame rate will
drop down - though I'm not sure if it's meant to be the same frame
rate we're talking about when making performance tests
with some frame rate testing tools.

By '@ will lose efficiency' I meant that the check interval for the
condition to be met will change according to the decreasing frames
available, while 0.5 seconds will always be 0.5 seconds.

- btw I found another word for LAG which I forgot at the end
of my last post: delay ;D

~S~ CD
Title: Re:Speeding scripts up
Post by: Nemesis6 on 08 Oct 2005, 18:34:49
I work a lot with particle scripts in OFP, so how's the "speeding" on the attached script? Am I getting it right? I'm thinking of reworking the FlashFX scripts with the techniques shown here... that is, of course, if I've gotten it right.
Title: Re:Speeding scripts up
Post by: Mikero on 10 Oct 2005, 18:59:09
@Nemesis

Yep, you've got it right, with the possible exception of the #loop script. If you consider the #loop to be where most of the time will be spent in this .sqs (as in, the explosion stuff is really just more initialise and won't be repeated), then the #loop should be at the top.

@others

line numbers aren't used by the engine.

meat-cleavering Lag to mean only internet lag is silly. Lag existed long before the internet was ever thought of. The generally accepted sense of it here is some performance hit on the video display imposing a timing constraint between mouse actions and what you see, but I seriously doubt anyone has any difficulty understanding lag in the context it is mentioned.


Title: Re:Speeding scripts up
Post by: Mr.Peanut on 14 Oct 2005, 19:20:02
So what is the minimum ~delay to use for a loop before it makes sense to use @ instead?
Title: Re:Speeding scripts up
Post by: THobson on 14 Oct 2005, 20:24:21
It really depends on the complexity of the condition

@ simpleCondition

will perform much better than

@ complexCondtion

You really need to test it.  I found that

@ (getDammage unit > xx)

was much better than a loop with a ~0.5 delay, but if it was:

@ ((getDammage unit > xx) and (some expression) or (someother expression))

I would expect the @ to take a greater hit than a loop

Title: Re:Speeding scripts up
Post by: Dinger on 15 Oct 2005, 22:03:46
Mikero -- find me cases of "Lag" being used to refer to FPS chunders before Multiplayer Internet "Lag" showed up. I'm sorry, but logically, an FPS chunder has nothing to do with Lag. You get controller Lag, when a system is swamped so that response times slow down, but that's different from "Lag" as FPS problems. Precise terminology makes for clear discussions, where the problems and the underlying causes are clear. Otherwise we chase each other in circles.
Title: Re:Speeding scripts up
Post by: Nemesis6 on 18 Oct 2005, 01:51:33
What about scripts that don't have any loops or #s? Same deal with them, or is it unnecessary at that point?
Title: Re:Speeding scripts up
Post by: Mikero on 18 Oct 2005, 03:33:49
Unnecessary. The principle is, anything that would cause the engine to scan the script again from the beginning promotes lag. #labels are prime candidates, and in your case
<start of file>
goto "init"

actually causes the engine to re-read that line, only to find it wasn't a label!!
Title: Re:Speeding scripts up
Post by: benreeper on 03 Dec 2005, 17:41:37
In a huge "if-then" switch block, is it quicker not to use a "goto" as break .

E.G.

this:
if (boola) then {do this; goto "end"}
if (boolb) then {do this; goto "end"}
if (boolc) then {do this; goto "end"}

as opposed
to this:
if (boola) then {do this}
if (boolb) then {do this}
if (boolc) then {do this}

--Ben
Title: Re:Speeding scripts up
Post by: Mikero on 04 Dec 2005, 06:53:28
>in a huge if then

would you believe, that's perverse! The SMALLER the if switch block, the better it would be to NOT use a goto.

The larger the switch testing, the more time spent decoding the if (bool). The reason is the # moniker on the label: the engine iterates the string table quickly, looking for that unique identifier in the first char position of any 'line'. But as a principle you're right, because this switch block could be buried very deep down a 1000-line script, and for the 10-odd ifs (a very large switch), I reckon testing bools and failing would be faster than scanning from the top.


Title: Re:Speeding scripts up
Post by: benreeper on 04 Dec 2005, 17:25:11
Gotcha.
--Ben
Title: Re:Speeding scripts up
Post by: hardrock on 09 Dec 2005, 12:27:51
I just investigated a bit further for the question "@ vs. loop"

For that I used a simple boolean as condition, and two different scripts to check. One checked the condition using @COND, the other one with a loop with 0.5 seconds delay and an if statement (?!COND : goto ...).

Each test run first started the tested script 5000 times in a while loop (to have them all start at once), waited a random amount of time and changed the condition. I measured the time it took each single script to recognise the condition and took the average of it. And to make even more sure, I did this 3 times and took the average of the 3 average values.

Here are my test results:


using loops with a delay of 0.5 seconds

• fps
Normal (Desert Island): 54
Script startup: > 16
Scripts running: ~42

• activation time
The average time for the 0.5 sec. loop to recognise the changed condition was 0.212, i.e. approximately half the delay.


using loops with a delay of 0.25 seconds

• fps
Normal (Desert Island): 54
Script startup: > 16
Scripts running: ~27

• activation time
The average time for the 0.25 sec. loop to recognise the changed condition was 0.091


using loops with a delay of 0.1 seconds

• fps
Normal (Desert Island): 54
Script startup: > 16
Scripts running: ~6

• activation time
The average time for the 0.1 sec. loop to recognise the changed condition was 0


using @

• fps
Normal (Desert Island): 54
Script startup: > 16
Scripts running: ~7

• activation time
The average time for the @ command to recognise the changed condition was exactly 0.


Conclusion:

'@' is equal to a loop with a delay of 0.1 seconds. They both eat a lot of performance, but are the absolute fastest to get activated. So if something needs to be very precise, the best way is to use @. Avoid it in scripts running in parallel though.

Everything else is better put into loops with delays of 0.25 seconds or bigger (every tenth of a second more means an increase in performance), even if it's the simplest condition, if you run the script over a long time or run multiple copies of the same script in parallel. In the latter case it would even be good to randomise the delay first, keeping a certain minimum.

e.g.:
Code: [Select]
;;; ~ 0.6
_r = 0.5 + (random 0.2)
~_r

Avoid writing ~(random 0.6), as this may have the effect that random 0.6 is calculated every frame. I am not sure about this though.
Title: Re:Speeding scripts up
Post by: Flauta on 16 Dec 2005, 08:24:07
mh.. sorry, I haven't finished reading the thread.. but what happens if we have, for example, 3 scripts:

Code: [Select]
//conditional.sqs
[] exec "condA.sqs"
[] exec "condB.sqs"
? condA : activationA = true
? condB : ActivationB = true

exit
Code: [Select]
//condA.sqs
@ ActivationA
xxxxx
xxxxx
xxxxx

exit
Code: [Select]
//condB.sqs
@ ActivationB
xxxxx
xxxxx
xxxxx

exit
this must obviously be more complex.. if it is as simple as that, there are better ways to script it...

but isn't this a "fast" alternative way?!?!?!


EDIT: (some "toEnglish" traslation threads fixed..)
Title: Re:Speeding scripts up
Post by: hardrock on 16 Dec 2005, 12:28:21
Well, you better just write

Quote
//conditional.sqs
? condA : [] exec "condA.sqs"
? condB : [] exec "condB.sqs"

exit

in your example, as it's got the same effects without using global variables.
Title: Re:Speeding scripts up
Post by: Flauta on 16 Dec 2005, 18:41:11
But with my way of making the script.. you are loading whole scripts from the beginning.. so the CPU "reads" the scripts once... not every time it needs to execute one.. and note that I did say it is more complex... I thought of that too!! ;D

Maybe it is faster.. maybe it is slower because of the two @s... that is what I'm asking.. how much faster is it than hardrock's way?
Title: Re:Speeding scripts up
Post by: hardrock on 16 Dec 2005, 22:36:57
Quote
Maybe it is faster.. maybe it is slower because of the two @s... that is what I'm asking.. how much faster is it than hardrock's way?
I think it's better to load the scripts when needed. Looking at my test results above, you can see that the @ command indeed needs a lot of performance, and I think loading one script is less work for the cpu than checking ten conditions in ten waiting scripts every frame.

But you're mentioning an interesting point. Loading a script is indeed a lot of work too, and above it was stated that a script isn't deleted from memory unless you use the exit command in it.

So, at least for non-looping scripts which have to be called several times in the mission, there'd be the way of preloading them via init.sqs.

You'd need the line
Code: [Select]
? (time < 1) : goto "end"
and the label #end at the end of every script. I know that this isn't that good for the CPU, as the engine has to search for the label "end" from the beginning of the script, but let's just assume that doesn't matter in the first second of the mission. After that label there should be the end of the script, plain, without an exit command.
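A minimal sketch of the structure being described, with hypothetical names:

Code: [Select]
; doStuff.sqs -- hypothetical non-looping script meant to be preloaded
; during the first second of the mission, skip straight past the body
? (time < 1) : goto "end"

; normal body, run on every later exec
_unit = _this select 0
_unit setDammage 0

#end
; no exit command here -- the script simply runs off the end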

Then you'd write in your init.sqs
Code: [Select]
;; load scripts
[] exec ...
[] exec ...

@(time>1)

;; do the rest

This way every non-looping script would be preloaded in the first second of the mission and would be available for the rest of the mission without the need to load it.

That's just a hypothesis and I never tested it, but it would be interesting to hear what you think about it.
Title: Re:Speeding scripts up
Post by: Mikero on 17 Dec 2005, 00:58:59
>and above it was stated that a script isn't deleted from memory unless you use the exit command in it.

Or by default: an exit command is implied at the end of any script.

A script only stays resident in memory if there's a permanent loop in it.

Thus

[] exec "anything.sqs"

achieves nothing speed-wise in terms of holding that script in memory if anything.sqs doesn't loop, permanently.

You can very easily validate this assertion:

create a call to a non-looping anything.sqs in an init.sqs (eg)

have the same animal called by (say) a radio trigger.

alter the text in anything.sqs while the game is running.

the text will change in a non-looping script, but not in a 'resident' one.
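A sketch of that check, with a hypothetical script name; init.sqs and a radio trigger would both just run [] exec "anything.sqs":

Code: [Select]
; anything.sqs -- hypothetical non-looping test script
hint "version one"
exit

Alt-tab out mid-mission, change the hint text and save, then call the radio trigger: if the description above holds, the new text appears, because a non-looping script is read from file again on each exec.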
Title: Re:Speeding scripts up
Post by: PheliCan on 18 Dec 2005, 12:40:42
Very interesting discussion! Just one question to make things clear...
Which one is better:

Code: [Select]
_i = 0
#arrayLoop
_current = _array select _i
do some stuff...
_i = _i + 1
~0.01
?(_i < count _array) : goto "arrayLoop"

or this one

Code: [Select]
{ [_x] call do_some_stuff_function, ~0.01 } forEach _array

This variant would also be interesting:

Code: [Select]
_i = 0
#whileLoop
do some stuff...
_i = _i + 1
~0.01
?(_i < 20) : goto "whileLoop"

or this one

Code: [Select]
_i = 0
while "_i < 20" do { [_i] call do_some_stuff_function, _i = _i + 1, ~0.01 }

The reason for using a function within the forEach and while commands is only to be able to use new lines (to better organise the scripts). Would the forEach and while commands solve the problem with re-reading the files (from memory)?
Title: Re:Speeding scripts up
Post by: THobson on 18 Dec 2005, 13:07:33
It depends what you mean by better.  The first is a loop involving several lines; the OFP engine can step through those lines, doing other things as well in between each line.  The single-line forEach (or while-do) instruction will be executed faster, but will therefore prevent the engine from spending time doing other stuff.  So it really depends what you want to do.  Is it essential that all these things get done quickly, or that the engine is freed up to do other stuff while it works on it?

In any event, _array needs to be pretty big, or the stuff you are doing to the elements of the array needs to be pretty significant, for you to notice much difference

Other points:

- you don't need the  0.01 wait in the loop
- the 0.01 wait will not work within a block of code delineated by {}
Title: Re:Speeding scripts up
Post by: PheliCan on 18 Dec 2005, 15:45:30
- the 0.01 wait will not work within a block of code delineated by {}

That pretty much says it all. The point of the pause was to give room for other tasks. Ohh, well...
Title: Re: Speeding scripts up
Post by: hardrock on 26 May 2006, 11:02:31
The difference is that code blocks à la forEach or while are executed within a single frame, just as functions are. All those blocks theoretically represent one line of a script, and OFP always parses one line at once.

So OFP takes the same time for executing
Code: [Select]
myVar = 1
as for
Code: [Select]
myVar = 1; mySecondVar = 2; myThirdVar = 3
or for
Code: [Select]
{myArray = myArray + [_x]} forEach [1,2,3,4,5]
Of course, the longer a line, the more lag will appear ingame in the long run. But if used wisely and sparingly, you can get pretty good results.