Minor cock-ups, such as an Objective not ticking, will affect the score slightly and get mentioned in the review, provided they do not have serious effects.
Major cock-ups result in an IM to the author describing the problem and giving him two weeks to fix it. If he does, well and good. If not, the mission is deleted. (We're not harsh, though: if he replies and asks for a bit longer, that's fine.) Occasionally, for one reason or another, such a mission gets through and achieves a score of zero.
If it's really bad you might suggest to an author that he put it on the beta board and spend some time testing before resubmission. However, reviewers are not beta testers. Although most reviewers do look at a mission in the Mission Editor (I certainly do, not that I've done many reviews), they are not obliged to go hunting for faults.
In the first example you quote I would send the author a message saying the mission is non-functional, please fix it. (The mission is obviously faulty, so there is no need to unpbo at this stage.) Say here are a few tips on what I think might be wrong, here are a few tips on where to get information on how to fix it, and maybe even here are a few tips on how to make the whole thing less crap and more fun. But from the sound of it he's staring down the barrel of a 2 or 3 anyway. Mission making is hard though, and I'm a firm believer in giving noobs as much encouragement as possible. Sometimes they come up trumps in the end.
The second example is trickier. This is one I would open in the Mission Editor right now and see what's going to happen. You can then take a view on how to proceed. It sounds as if it's just crap rather than faulty, which just means a poor mark.
Crappy missions like these are a real problem for the community. They waste a lot of good people's time and create disappointment, frustration and bad feeling all over the place. As you probably know, we're trying to encourage more people to put their missions on the beta testing board and then beta test other people's missions. Most people who make crappy missions do so because they don't know any better: if they are nudged into a journey of discovery it a) keeps them off our backs and b) means that the next mission/version will be much better.
Like many people I've played a lot of missions, many of them as a beta tester, reviewer or judge. And in general, having seen the download blurb, readme, overview, intro, briefing and first three minutes of the mission, you know roughly what standard it's going to be. There are exceptions but they constitute well under 10%. The trick with crappy ones is to cut your losses and finish the whole procedure as quickly as possible, but without being unfair to the author in case it is one of the few. Unfortunately it's not an easy trick to carry off. You have my sympathies.
To answer the general point of how far to go, as a rule it is not sufficient simply to play a mission once. Apart from anything else, you don't know what you've missed. I tend to play it once and then look in the Mission Editor. I'll play parts of it again, watch the other outro if there is one and discover how it occurs, run around bases and investigate interesting-looking corners. Alternatively I may play through it again in a different way with setCaptive true or with some other cheat. I'm sure other reviewers have their own techniques, but I know that they all do something.
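For anyone who hasn't tried the cheat-run approach: a quick way to do it is to drop a radio trigger into the mission in the Mission Editor and have it run a small debug script. This is only a rough sketch using standard OFP-style scripting commands (setCaptive, setDammage); the script name and trigger setup are just illustrative:

```
; debug.sqs -- rough sketch of a "god mode" replay for reviewing a mission
; Hook it up via a trigger set to Radio Alpha, with
;   [] exec "debug.sqs"
; in the On Activation field.

player setCaptive true       ; enemy AI ignores the player

#loop
player setDammage 0          ; keep the player topped up while exploring
~1                           ; wait one second, then repeat
goto "loop"
```

Then you can wander the map freely, check that objectives tick, and poke around bases without fighting through the whole mission again.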
For a very simple mission this may mean one play through plus five minutes. (Plus thinking about marks and writing the review, of course.) However, for a complex mission it may mean spending three or four times the normal mission time. If it's complex-good that is merely a pleasure. If it's complex-crap it can be really hard work ... but that is the cross the mission reviewer has to bear, and is partly why so few people have what it takes to be a mission reviewer.