CMAK Imminent - can we fix the Scenario Depot Rating System Beforehand?



Originally posted by WWB:

Good suggestions GaJ, but one part is fundamentally flawed. Back in younger and freer times I did a bit of creative writing. My professor drilled one point home--author's intentions do not matter once a work has been made public.

There is no way an author should be allowed to "weight" reviews of a given battle. No player knows the author's intentions when opening a battle, aside from what should be made clear in the briefings. If an author makes a battle only suitable for AI play, but expects people to divine that, he should be penalized, rather than allowed to jimmy things around to hide his error.

WWB

No, no, he weights them BEFORE the reviews are made, if I am reading this right. I.e., you make a scenario that is intended to be unbalanced, so you weight that factor at 20%, say. But it is designed to be PBEM, so you weight that at 100% and weight "playable vs AI" as 0%.

Correct me if I'm wrong, but I think that was Green as Jade's intent; also correct me if I'm wrong, but it appears you've read it as weighting things after the reviews are in?


I hope that my suggestion of the possibility of retrofitting scenario ratings from the designer to existing scenarios won't detract from sensible discussion of using the proposal going forwards. They are two separate possibilities.

Supposing people like the suggestion going forwards, there will be an aspect of retrofitting that I am aware will need debate: whether authors will retrofit weights towards existing good scores.

I imagine this possibility would initially cause great concern, but I don't think it's warranted.

If someone rates their own scenario as 10 in "Play against AI" and one or zero for all the others, to target some good ratings people gave them, they have to realise that they will be telling potential users that they believe their scenario is worthless in the other categories! That, I think, is the beauty of the proposal: the weighting is also a statement of intent and self assessment by the designer.

Let me say again, though: how about deciding if this makes sense for a system going forwards first, then decide if retrofit makes sense second...

GaJ.

Originally posted by GreenAsJade:

I hope that my suggestion of the possibility of retrofitting scenario ratings from the designer to existing scenarios won't detract from sensible discussion of using the proposal going forwards. They are two separate possibilities.

Supposing people like the suggestion going forwards, there will be an aspect of retrofitting that I am aware will need debate: whether authors will retrofit weights towards existing good scores.

I imagine this possibility would initially cause great concern, but I don't think it's warranted.

If someone rates their own scenario as 10 in "Play against AI" and one or zero for all the others, to target some good ratings people gave them, they have to realise that they will be telling potential users that they believe their scenario is worthless in the other categories! That, I think, is the beauty of the proposal: the weighting is also a statement of intent and self assessment by the designer.

Let me say again, though: how about deciding if this makes sense for a system going forwards first, then decide if retrofit makes sense second...

GaJ.

I think my entire point was missed. Basically it is that the work must stand on its own, in and of itself. The author should get no chance to say "what I really meant for you to take from this is X." If he really meant for someone to think that, they should make sure that effect is in the scenario file, not after the fact at a website. An author's intent is irrelevant. The effects created by the work are relevant.

As for retrofitting ratings, it actually would not be too hard to do with the new system. Just take the averaged rating, divide by two and round to the nearest integer, presuming we are going with the 1-5 scale. The issue will be none of the old ratings have registrations attached, and it is likely too much effort to get them attached, so they will be less valid to a large extent.
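The conversion rule WWB describes can be sketched in a few lines. This is a hypothetical illustration only: the function name is mine, and the round-half-up tie-breaking is an assumption, since the post just says "divide by two and round to the nearest integer".

```python
import math

# Hypothetical sketch of the retrofit rule: halve an old averaged
# 1-10 rating and round to the nearest integer to get a 1-5 rating.
# Round-half-up is an assumption; the post does not say how ties break.
def retrofit_rating(old_average):
    halved = old_average / 2
    new_rating = math.floor(halved + 0.5)  # round half up
    return max(1, min(5, new_rating))      # clamp to the 1-5 scale

print(retrofit_rating(7.4))  # 3.7 rounds to 4
print(retrofit_rating(9.0))  # 4.5 rounds up to 5
```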

WWB


WWB,

I think they are saying that you put a rating on your own scenario at the time you upload it. Then, after the reviews start coming in, that rating would be reflected in the results. No one is trying to say that the review shouldn't be accurate or even done, but that you should have some say in the results.

For instance: you make a scenario that is for two players, but I play it against the AI and rate you way low for that. What I believe Green as Jade would have us put in place would allow you to basically throw that review out, or at the least downplay the bad review. A review that you got erroneously, because I rated you down for something that you didn't intend me to do in the first place. Please correct me where I fail to accurately portray the intents here.

Or I don't like all armor battles, don't like night fights, don't like small-huge, or whatever, and I give you the review on something other than what your scenario is about. Or maybe, that was exactly what it was about and I missed the point.

I would welcome such a balancing of the intent so to speak. I'm sure that any designer that has gotten these reviews, that seem to come from God knows where, would welcome a chance to level out the playing field a bit.

You may have even gotten some of them yourself. If you have this would be a way for you to have your say as the designer. I have found though that you can do about the same thing if you respond to a review. Even a bad one gives you the chance to have your say and defend your work.

Panther Commander

Originally posted by WWB:

I think my entire point was missed. Basically it is that the work must stand on its own, in and of itself. The author should get no chance to say "what I really meant for you to take from this is X." If he really meant for someone to think that, they should make sure that effect is in the scenario file, not after the fact at a website. An author's intent is irrelevant. The effects created by the work are relevant.

This is nonsensical. If I design a scenario specifically for PBEM play, the game is playtested exclusively by PBEM play and eventually balanced for PBEM play, how exactly does this become apparent from the scenario file? That is impossible, since the only way to make this apparent is to say so in the briefing. What then, short of saying up front "this is for PBEM play", would stop someone from playing this scenario vs. the AI, finding it unsuitable, and then marking it down via whatever review process we end up with?

You say that the effect should be in the scenario file. This is clearly impossible.

Unless of course we're still missing your point; if so, I say quite honestly it hasn't been clearly presented, and would invite you to try again.

Perhaps you can give us an example of what you mean by "effect" and how this is made obvious in a scenario design?

Originally posted by WWB:

I think you just stated the answer. If a scenario is supposed to be played PBEM, make it clear. If it is for AI only, make it clear.

WWB

And what I am saying is:

1) The mechanism to make it crystal clear is having the designer supply the weighting for each aspect. If your scenario is for PBEM, put a weight of 10 on PBEM when you submit it. People can then look for scenarios that are PBEM 10 and know they are getting scenarios designed for what they want to do.

and

2) Having this intent cause reviews relevant to that intent to have higher weighting in the overall score than reviews that target non-relevant aspects.

And do this using a simple, elegant mechanism.

If the designer weighted PBEM 10 and "vs AI" 0, and the reviewer gave "vs AI" 3, then:

1) A reader knows the scenario sucks vs AI.

2) The designer is not offended by this rating, because

3) The overall rating is not pulled down at all by the "3".

GaJ.
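The mechanism GaJ outlines is essentially a weighted average with the designer's weights fixed at upload time. A minimal sketch under assumptions (the category names, the 0-10 scales, and the function name are all mine, not from the thread):

```python
# Minimal sketch of the designer-weighted rating GaJ proposes.
# The designer fixes a weight per category at upload time; each
# review's category scores are then averaged using those weights,
# so a category weighted 0 cannot move the overall score at all.
def overall_rating(designer_weights, review_scores):
    weighted_sum = sum(designer_weights.get(cat, 0) * score
                       for cat, score in review_scores.items())
    weight_total = sum(designer_weights.get(cat, 0)
                       for cat in review_scores)
    return weighted_sum / weight_total if weight_total else 0.0

# Designer: built for PBEM, explicitly not for play vs the AI.
weights = {"PBEM": 10, "vs AI": 0, "briefing": 5}

# Reviewer played it vs the AI anyway and marked that category down.
scores = {"PBEM": 9, "vs AI": 3, "briefing": 8}

# The vs-AI "3" stays visible to readers but has zero effect here:
# (10*9 + 0*3 + 5*8) / (10 + 0 + 5) = 130 / 15
print(round(overall_rating(weights, scores), 2))
```

Note the design property the thread keeps circling: a zero weight makes a category's score inert, whether the reviewer gives it a 1 or a 10.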

Originally posted by WWB:

Good suggestions GaJ, but one part is fundamentally flawed. Back in younger and freer times I did a bit of creative writing. My professor drilled one point home--author's intentions do not matter once a work has been made public.

There is no way an author should be allowed to "weight" reviews of a given battle. No player knows the author's intentions when opening a battle, aside from what should be made clear in the briefings. If an author makes a battle only suitable for AI play, but expects people to divine that, he should be penalized, rather than allowed to jimmy things around to hide his error.

WWB is absolutely correct. Example: Winter Wonderland was a scenario I did that needed to be played a specific way. I thought it was obvious. No one else did. My fault, not theirs.

There is another alternative. If certain designers don't want to use the designer ratings, they wouldn't have to.

It could be offered as a benefit for those that do. I don't see where it would have to be something EVERYBODY used for it to be a useful tool. Obviously, not everyone who downloads from the SD uses the review tool, or there would be many more reviews. The same could be said for the designer ratings.

Panther Commander

Originally posted by WWB:

A few issues here. First, put yourself in a player's seat. They download a scenario on 1 October. They sit down and start playing it on 15 November. Do you really think they are going to remember that the author marked it as a 3 for Play vs the AI?

Why would he have to remember anything, Wyatt? The weight is entered via computer - no matter what value the reviewer enters for playability vs AI, it will be weighted by the computer automatically according to the value set by the designer. So if it turns out the player didn't remember it was intended for play vs the AI (and why wouldn't he read that info in the BRIEFING, usually done about 10 seconds before SET UP) he could still rate it with that criterion in mind, and the overall review would not be made to suffer for it, because the designer already weighted that category appropriately.

Unless they are an odd sort of person and have a photographic memory, they will not. All they have to go on is the instructions contained within the briefing, and those should not how the scenario is intended to be played if that is an issue.
I think you left a word or two out of this sentence - did you mean to say that scenario briefings should not tell people the intended method of play (or in other words, the method for which the scenario was designed)? You keep saying that whether or not a scenario is intended for PBEM or vs. AI should be obvious by the design of the scenario only - I hope I am misunderstanding that, because it would be folly to expect a player to pick up any scenario, play it, and then deduce correctly afterwards that it was meant for head to head play based solely on the map, VC, and forces. Surely this isn't what you are suggesting, because this is how I am reading it.

Now, this is not to say people will always listen to that. Look up Katukov Strikes Back in the depot. And look at the briefing. People have played it multiplayer even though there is a big note in the briefing stating: "Do not play this scenario 2 Player." But, if we have the checkboxes, and possibly the categories, we can see how the reviewer played the scenario, without fudging around with the overall rating.
No one is fudging anything; the intent is to prevent buddy from rating Katukov Strikes Back on how well it plays head to head. The intent being that the designer wants people to play it against the AI. The designer hopes the scenario is enjoyable. He wants others to play it. He doesn't want someone coming along, playing it head to head, and trashing it based on his experience which was outside the designer's intent, thereby dissuading others from downloading and enjoying it as it was meant to be played.

This isn't unreasonable, either; anyone who takes the time to understand the difference between play vs. the AI and play head to head should be rewarded for taking the time to balance a scenario for those specific conditions - most scenarios need to be played either one way or the other - and not punished for taking the extra time to ensure the scenario performs well.

Also, I am a firm believer in criticism. Any creative work, no matter how brilliant, will have its detractors. If you do not want your work criticized, maybe you should not put it out in an open place designed to gather critiques. Every bad review is not "trolling." It might well be deserved.
Obviously correct; I think designers would prefer that bad reviews be based on hard factors rather than subjective factors, but this is not always possible - sometimes you just think something sucks.

Setting things up so someone can avoid taking bad reviews is a perversion of any system of making things available and reviewable.
You're completely wrong here, and suggesting that weighting is simply designers trying to worm out of bad reviews is actually a little insulting; the intent is not to avoid bad reviews, the intent is to avoid unfair reviews. I know that you know the difference. And hey, weighting affects good scores too. If you weight "playable vs. AI" as 0 and I find out that it is a great scenario against the AI and give you 10 out of 10, or 5 Green Clovers or whatever system is adopted, the overall rating is still going to be as unaffected as if I gave you a 1 out of 10 or 1 Blue Moon.

We don't need to replace a slightly too complex, easily abusable system with an even more complex, designed-to-be-abused system.
The design suggestion is to avoid abuse, not encourage it.

As for simplicity - as a designer, I've only been led to change any of my scenarios - even the bad ones - by perhaps 10 or 20 percent of the people who have reviewed them. The majority of reviews were positive and didn't suggest changes; a minority were bad reviews and also didn't suggest changes; a very tiny minority of both good and bad reviews did suggest changes, which I later implemented (some of the 2nd versions made it onto the special edition disc). From a pure designer's standpoint, if you increased the complexity of the reviews and drove away 80 percent of the people doing reviews now, it would be something I could live with. Oh, my ego would take a hit, sure, because I do enjoy getting the "wow cool battle" reviews as much as anybody else, but I am sure other players looking to download something they are sure to like are not well served (see below).

As for simplicity from a player's standpoint - I also download games - the majority of PBEM games I've done have been from stuff downloaded from the depot, and always after a search through the reviews (though I hate reading spoilers beforehand, preferring to play blind). A more sophisticated rating system would be handy here, also. Again, as a player looking to download something fun, I got no benefit from one-line reviews saying 'wow, really cool battle'. At that point, I was looking more at size, battle type, date, force composition - all stuff you don't need a scenario depot for. If someone wrote a really well-worded review that gave me a reasonably detailed explanation of why they liked the game - and even better, if there were 2 or 3 reviews like that for the same scenario - it went on my list. If someone takes the time to give a detailed review rather than a one-liner, that suggested to me it was worthier of a download. In general, I will download a scenario rated 6 that has 2 or 3 really good reviews that describe the scenario in detail (with minimal spoilers), especially if I've seen the reviewers before and trust their judgement, as opposed to a scenario rated 9 with 2 or 3 'wow really cool' comments attached.

But that's just me.

[ December 06, 2003, 11:51 AM: Message edited by: Michael Dorosh ]

Why would he have to remember anything, Wyatt? The weight is entered via computer - no matter what value the reviewer enters for playability vs AI, it will be weighted by the computer automatically according to the value set by the designer. So if it turns out the player didn't remember it was intended for play vs the AI (and why wouldn't he read that info in the BRIEFING, usually done about 10 seconds before SET UP) he could still rate it with that criterion in mind, and the overall review would not be made to suffer for it, because the designer already weighted that category appropriately.
You would be surprised how many people forget, or screw this up. Presumption is the mother of all screwups. The designer knows how his work is to be played, and it is his responsibility to communicate it to the players. Whether or not a player follows said advice is up to the player.

I think you left a word or two out of this sentence - did you mean to say that scenario briefings should not tell people the intended method of play (or in other words, the method for which the scenario was designed)? You keep saying that whether or not a scenario is intended for PBEM or vs. AI should be obvious by the design of the scenario only - I hope I am misunderstanding that, because it would be folly to expect a player to pick up any scenario, play it, and then deduce correctly afterwards that it was meant for head to head play based solely on the map, VC, and forces. Surely this isn't what you are suggesting, because this is how I am reading it.
The 'not' in the sentence in question should be 'note.' Last time I checked, scenario files included briefings.

No one is fudging anything; the intent is to prevent buddy from rating Katukov Strikes Back on how well it plays head to head. The intent being that the designer wants people to play it against the AI. The designer hopes the scenario is enjoyable. He wants others to play it. He doesn't want someone coming along, playing it head to head, and trashing it based on his experience which was outside the designer's intent, thereby dissuading others from downloading and enjoying it as it was meant to be played.
So someone plays a scenario the 'wrong way' and trashes it - let's say single player for a multiplayer battle. A potential player sees, through the checkboxes, that a reviewer played that scenario single player and it sucked. They also see the multiplayer recommendation and are still free to download it.

Obviously correct; I think designers would prefer that bad reviews be based on hard factors rather than subjective factors, but this is not always possible - sometimes you just think something sucks.
A review is, by nature, subjective. Moreover, what exactly are the hard factors? Is there a book of technically perfect scenario design? No. There are a few no-nos, but in general whatever works and is generally fun to play with is acceptable.

You're completely wrong here and suggesting that weighting is simply designers trying to worm out of bad reviews is actually being a little insulting; the intent is not to avoid bad reviews, the intent is to avoid unfair reviews. I know that you know the difference. And hey, weighting affects good scores too. If you weight "playable vs. AI" as 0 and I find out that it is a great scenario against the AI and give you 10 out of 10, or 5 Green Clovers or whatever system is adopted, the overall rating is still going to be as unaffected as if I gave you a 1 out of 10 or 1 Blue Moon.
I think I see our main difference here. Personal attacks aside, I contend that there is no such thing as an unfair review. As the author, you might not agree with a player's assessment, but that does not mean such a review is invalid. Far from it--I learn more from reviews I do not agree with than from those which say "Bravo, good job!"

The design suggestion is to avoid abuse, not encourage it.
Has it occurred to you that designers are as apt to abuse the system as players are? This system will allow designers to completely limit their exposure to things which they think might be unfavorable.

Especially given this system will be added after the fact. Oooh, scenario X is rated horribly for play vs the AI. Well, I will just rate that as 0 and push it up the list. This is fundamentally wrong. As I have stated earlier, any work released into the public should be weighed on its own merits alone. The author should not, save releasing a revision, have any opportunity to skew how it is seen by the public.

There is one question at the heart of this matter. What purpose should the depot serve?

I think it is a place for players to download scenarios, with some help in picking from players who have played the battles before. It is there to allow designers to get some limited feedback - though really, most of the feedback to the designer should have happened when the battle was being playtested, not post-production. It is not a place for designers to inflate their egos, and it most definitely does not exist so authors can ensure that their scenarios are rated the best possible.

I would be interested in hearing what answers others have to the above question. Also, this discussion has been entirely one-sided, consisting just of designers. I would be interested in hearing from a player or two about what they would like done. Who knows, many might think it works just fine as it is and that the ratings are perfectly fair.

I would also remind us that our purpose here is to come up with a good, simple, reliable way of rating battles and to help Keith implement it in whatever ways we can. Adding entire layers of complexity to the process does not take us further towards this goal.

WWB

Originally posted by jwxspoon:

For the sake of simplicity and ease of use for all scenario authors and reviewers I hope that the changes made are simple. I like the revision to my original suggestion made by Berli.

jw

As do I. I really don't see why people want to make it more complex; we get so few reviews as it is.

I made the suggestion I did because it:

  • Is simple
  • Requires no change to anything for either player or reviewer
  • Addresses the most problematic issue: adverse scores on irrelevant aspects detracting from the overall rating of a scenario, while still allowing reviewers to give ratings on those irrelevant aspects should they feel the need.

Another positive aspect of the suggestion I made is that it can be implemented with low impact on the current reviews, and taken out again without detracting from reviews that are done while it is in place.

Some of WWB's objections, I'm afraid to say, indicate that he isn't understanding how the suggestion is intended to work. They are objections to features of the suggestion that don't exist.

However, WWB is entitled to his opinion. We know what WWB's opinion is now, and there seems to be little prospect of changing it by debate or further attempts to describe the proposal.

What happens next is up to Admiral Keth I guess... there's little point in three people with fixed opinions (Michael, WWB and me) debating disagreements further.

What would be productive is suggestions for refinements to address shortcomings (if there are any) or alternatives. Or any opinions from anyone else.

Otherwise Admiral needs to decide to do something or just put us out of our misery by telling us he's not going to do anything!

GaJ.

Originally posted by GreenAsJade:

Some of WWB's objections, I'm afraid to say, indicate that he isn't understanding how the suggestion is intended to work. They are objections to features of the suggestion that don't exist.

...

Otherwise Admiral needs to decide to do something or just put us out of our misery

Glad it wasn't just me getting that impression.

Fruhlingswind is (was?) OK. Unlike the sterile demo battles in CMBB, it sold me on the new version. And I understand that, given bandwidth concerns, the author, Rune, had to shrink the map to uncongenial dimensions. After all, it's just a demo and it was successful in a promotional way.

My issue is with the 'simulation' aspect. As Rune pointed out, in the real battle, the Germans caught the American tankers having breakfast and began brewing up their tanks at leisure. This is impossible to duplicate in CM 1, 2, or 3.

In Villers-Bocage, a similar situation, the designer had to go into contortions to render this battle comprehensible in CM terms by, among other things, rating the tank crews Conscript. Now, these were the notorious Desert Rats of NA fame, Veterans if the appellation has any meaning.

IIRC, Steve has intimated that in the next version weapons crews will be detachable. When that happens, designers can do battles like V-B full justice. Until then, with all the available battles out there waiting to be simmed, why bother trying to shoehorn CM into a box for which it's not ready?


There were good things about Fru, but my own opinion is that the "surprise" briefing was not one of them.

It can be done well: take a look at the CMBB scenario "Meeting!". Same "lum de dum, we're not expecting trouble" atmospherics, but a clean transition into "oh, we are under attack".

In addition, in a situation of response to a surprise attack, one would not expect to have the opportunity to set up a defensive position. To match the scenario with the briefing better, a scenario like Fru could have had all the units padlocked in "having morning tea" positions.

That for me would have made the difference between an adequate briefing (which it was) and an inspirational one.

Just my opinion... everyone will have one, they'll all be different ;)

GaJ.


I just want to elaborate on my earlier suggestion which has much in common with Berli's. Why not retain the current structure of the depot and add an additional rating for "enjoyment"? This score would replace the "overall" score and would allow someone to mark down things like replayability while still giving a glowing review and marking high for enjoyment.

Scenarios would then be advertised according to their enjoyment level and, if someone was interested, they could then go and see what the reviewers' opinions of the briefing, etc., were.

As to a 5-point vs 10-point marking system, I think either is practical, but prefer the latter since it allows for better discrimination between scenarios. There could be reviewing guidelines along the lines of

1-2 Terrible

3-4 Poor

5-6 Average

7-8 Good

9-10 Excellent

while perhaps retaining the 0 for "no opinion". That is pretty straightforward and I think most would follow such guidelines. From the designers point of view, it would really mean something to score a 10!
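The guideline bands above are straightforward to express in code. A small illustrative sketch (the labels come straight from the proposal; the lookup function and its name are my assumptions about how a site might apply them):

```python
# Illustrative lookup for the proposed 10-point reviewing guidelines,
# with 0 retained for "no opinion". Band labels are from the proposal;
# the function itself is a hypothetical way a site might use them.
GUIDELINES = [
    (0, 0, "No opinion"),
    (1, 2, "Terrible"),
    (3, 4, "Poor"),
    (5, 6, "Average"),
    (7, 8, "Good"),
    (9, 10, "Excellent"),
]

def describe(score):
    for low, high, label in GUIDELINES:
        if low <= score <= high:
            return label
    raise ValueError("score must be an integer from 0 to 10")

print(describe(10))  # Excellent
print(describe(5))   # Average
```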

I think that this might be the most straightforward way for change to be made to the depot with a minimum of fuss for those involved. I also want to note my appreciation of the depot and all who sail her; she has provided many a pleasant hour... ahem.

John

Originally posted by John O'Reilly:

I just want to elaborate on my earlier suggestion which has much in common with Berli's. Why not retain the current structure of the depot and add an additional rating for "enjoyment"? This score would replace the "overall" score and would allow someone to mark down things like replayability while still giving a glowing review and marking high for enjoyment.

Scenarios would then be advertised according to their enjoyment level and, if someone was interested, they could then go and see what the reviewers' opinions of the briefing, etc., were.

As to a 5-point vs. 10-point marking system, I think either is practical, but I prefer the latter since it allows for better discrimination between scenarios. There could be reviewing guidelines along the lines of

1-2 Terrible

3-4 Poor

5-6 Average

7-8 Good

9-10 Excellent

while perhaps retaining the 0 for "no opinion". That is pretty straightforward, and I think most would follow such guidelines. From the designer's point of view, it would really mean something to score a 10!

I think that this might be the most straightforward way for change to be made to the depot with a minimum of fuss for those involved. I also want to note my appreciation of the depot and all who sail in her; she has provided many a pleasant hour... ahem.

John

This suggestion has no merit. 5 "or" 6 for average? I see no point in assigning two different scores to mean the same thing. That's kind of the problem we have now - you're the reason I started this thread in the first place. My briefing of Regiments Die 10 Times was consistently rated 8, 9, and 10 until you and Frunze came along and rated it down - and apparently for no reason, since no comments were left explaining the rating.

Enjoyment is also subjective. You can type "I enjoyed this a lot" in a text field; personally, I think that is more appropriate than assigning a 1-to-10 rating for "enjoyment". The best-designed, most balanced scenario in the world can easily become unenjoyable if you start to lose! :) Does that mean the design itself is faulty, or that others will not enjoy it?


I have been keeping a close eye on the thread (in my spare few microseconds). I am at once amazed and gratified at the passion being displayed by the participants. However, everyone still seems hung up on developing an aggregating numeric-based system. I am increasingly averse to implementing any kind of system that can be subverted or corrupted. Everyone needs to look at all of the suggestions being made and decide how they themselves could potentially use the system to their advantage. Therein lie the future "The SD is Broken" threads.

I am still leaning towards Jeff Weatherspoon's 1-5 system, along with the series of checkboxes and textual review. I would prefer to see suggestions on how this could be expanded and made more informative or detailed without adding too much more complexity. Anything too complex will:

A) Be more difficult to code,

B) Cause players to not submit reviews, and

C) End up being re-written a year down the road.

Changes that _will_ be made are as follows:

1) Remove the Awards line - Maybe re-implement at a future date with better graphics.

2) Change the location of the download link - Some players have difficulty locating this link.

3) Add the capability to search for vs.-AI and PBEM battles.

4) Add a PayPal donation link.

Changes that will not be made include any system which attempts to rank scenarios against each other. That system has obviously failed and, once you think about it, is invalid. Each scenario needs to stand on its own merit. Ranking a 10,000-point historical scenario against a 500-point fictional Byte Battle just doesn't make sense. Therefore that system is seeing its last days.

In addition, how do the authors want to handle historical rankings? Simply archive the lot and start fresh? Leave them in place as is and ignore them for future ratings? This aspect needs to be handled in a logical and simple fashion, without invalidating the effort everyone has put into placing reviews over the past couple of years.

In summary, the aggregating numeric system will soon go the way of the Dodo. Let us concentrate our design efforts on a simpler, more effective system. Once again, there must be a consensus, first from the authors and then from the players, on how the system is going to work. This board system has the capability to post polls. Perhaps we can beg/wheedle/cajole MaddMatt into activating that feature for a one-time vote.

Link to post
Share on other sites

Michael,

A 10-point system allows one greater discrimination. Is a map slightly above average but not something you would consider "good"? Fine, then it's a 6. Bog standard? It's a 5. Pretty straightforward.

As for my rating, I can only believe you are now being either obtuse or overly sensitive, as I already explained this to you in a personal communication. The Russian briefing was functional (noted by the reviewer just before me, by the way) and the German one was good. Where does that leave me overall? Well, slightly above average, which equals a 6.

Most people do not detail the thinking behind the rating system in whatever text they enter; you are a notable exception. I generally tend to comment on what most stood out for me in a scenario. If it was one that I enjoyed or found really challenging, then that will tend to get more commentary. What do you expect?

You could equally say that it is your own rating system that has brought this discussion up, since it is opposed to mine. Who defined your approach as the appropriate yardstick? You previously wrote that you mark everything on the higher end of things. I find this of little use, since if all marks fall in the same narrow range they tell me nothing. Hooray for everything.

As for enjoyment being subjective, well, this whole system is! We all have our preferences, and even for categories like "Map", where one might expect a consensus, there is variability in people's responses. If you look at very popular scenarios, you will see that the majority of comments are glowing, whether they come from winners or losers. The two might disagree on balance (or maybe not), which would be reflected in their scores for that particular category, but they would probably be unanimous if given the chance to specify their overall enjoyment. A great scenario might have very low replayability, which will result either in it being marked down or in people entering zeros. Why not allow for a low mark in replayability (or another category) and a high mark for the overall quality of the experience?

Hope this is clear,

John

