Scoring is not strange, but once upon a time it was unusual. In the bad old days many sites accepted user-created missions, but few rated them. Some of those sites no longer exist, but some still do - ofp.info probably being the best example. There, missions are published without comment. ofpec, from its earliest days, set out to be different and to be committed to quality. Therefore all the missions hosted here are functional: if a mission is accepted with problems, those problems are mentioned in the review.
In the bad old days you could spend a lot of time downloading missions that didn't work. Broken missions are still a major problem among submissions to ofpec, which is why we talk about beta testing so much: for a mission to be any good it MUST be tested by third parties. The public do not see much of the hard work done by reviewers: about a third of submitted missions are non-functional or have major errors, and while reviewers are not beta testers, we do give authors the chance to fix problems.
Discriminating? Yes, that's the whole point.
Influencing the mission making process? Yes, that's completely deliberate and has always been an important part of the whole review process. Most rookie mission designers make the same mistakes as their predecessors: the review process is partly designed to help them avoid those errors and to encourage them into good mission design practice. I recently submitted a mission that scored reasonably well: did I deliberately add features that I knew would attract marks? I sure did. Is the mission better for it? It sure is.
The mission designer grants the reviewer the right to examine the mission: nobody is obliged to submit a mission to ofpec. (There are plenty of other ways to get it into the public domain.) The reviewer earns the right to review on behalf of ofpec by passing a stringent test: more applicants fail than succeed.
There is not the slightest doubt that the overall score is always debatable by one point. Occasionally it's more, but for the vast majority of missions, the vast majority of well-informed players would agree with the score +/- 1. Of course, we've all had the experience of playing an 8 or 9 and thinking, "WTF?". The answer is that enjoyability is only one factor among many: what we are trying to measure is how good the mission is, not how much it is enjoyed by the reviewer, which would be a much more personal thing.
If you think the score or review is wrong or unfair, have a word with Artak, the Missions Depot Admin. Scores have been changed in the past, though this is exceedingly rare and happens only shortly after a review has been published.
A great deal of the score is not a matter of opinion or taste: the reviewing guidelines are quite well defined. You quote the example of long walks. It's true that you will ultimately lose marks for excessively long walks (they are boring, which is indubitably a negative in a leisure activity, and you don't need a mission to admire the Malden scenery; you can do that yourself), but a far more important consideration is the context: is the tedium appropriate, and is it sufficiently rewarded? Reviewers are well aware of the risks of personal bias and try to avoid it. Obviously they are sometimes unsuccessful and sometimes they overcompensate. C'est la vie.
Having said all that, you do have a very important point and it is one that we have discussed in the past. The headline score is given too much significance by many people.
Far more important than the score is the text of the review itself: the score is just a summary. However, if you are not one of these score-obsessed people, you are perfectly at liberty to ignore it.
If you find a low scoring mission that you think is good, fantastic: give it a high user rating, post a Comment saying you thought it was good and, if you really like it, advertise it in your signature line. However, the plain fact is that most low scoring missions have low scores for a reason. In reviews of such missions you will often find tips and suggestions from the reviewer on how the mission designer could do better next time.
"You know; where's the reviewer's 'right' to decide for me (or even in the name of the site) which mission is good and which is bad? I mean, the taste of that reviewer can be diametrically opposite to mine."
The confusion implicit in this quote is, I suspect, the nub of the whole problem. The reviewer is not looking to see whether a mission is to his taste: he is looking to see whether the mission is any good. And he is not deciding for you; he is giving you information to help you decide for yourself.
If we remove the headline score, who gains?