Everything posted by GreenAsJade

  1. It doesn't matter how each of them rates it. It's a shame that designers feel so passionate about any given individual rating (because it is about their work, after all) when, for the purposes of selecting highly recommended scenarios, it's all about averages. If you give it 4 and Akula2 gives it 2 then it will show up as 3. That will at least position it and allow someone to start making an assessment. Someone else will read _why_ you gave it 4, and will choose to play it because you made sense to them. They will give it 4. Its average will move up to 3.3 (or thereabouts). Now it is already rated better than average because at least _some_ people liked it, so it is a more logical choice for someone looking for a scenario than one which no-one liked. It's all about averages. Don't get hung up on individual scores. GaJ.
  2. Simple and effective: works for me. (Though I would make a good score higher than a bad one.) Or: 5 Must Play; 4 Highly Recommended; 3 Worth Playing; 2 You might like it, I didn't; 1 Fatally flawed: don't bother.

I thought an example would illustrate best, so I emailed one to Admiral. Then I thought "what the heck, why not let everyone pick it to bits." So here it is:

Joe the reviewer has posted 3 reviews. The designers of each scenario he reviewed gave him ratings of 4, 5 and 3 (out of 5) respectively as a reviewer. They thought his reviews were pretty good. So his personal reviewer rating is (4+5+3)/3 = 4.

Troll the reviewer has posted 2 reviews. The designers of each scenario he reviewed gave him 0 and 1 respectively. He was obviously a troll. So his personal reviewer rating is (0 + 1)/2 = 0.5.

Joe and Troll each come to review a new scenario. Joe gives the scenario a recommendation of 4 ("highly recommended"). Troll gives the scenario a recommendation of 1 ("don't bother playing it"). The net recommendation for the scenario from these two reviews is the average of Joe's recommendation and Troll's recommendation, weighted by their reviewer ratings. Calculation: [ (Joe rating * Joe score) + (Troll rating * Troll score) ] / (Joe rating + Troll rating), which is [ (4 * 4) + (0.5 * 1) ] / (4 + 0.5) = 3.7.

See how this scenario ends up being nearly highly recommended (3.7 is nearly 4), as rated by Joe, because Joe's recommendation counted 8 times as much as Troll's. But it's not quite a 4, on the off chance that Troll was pointing out a real flaw.

Now suppose Joe goes psycho, and starts submitting bad reviews. Before long his own reviewer rating drops to Troll-like levels and he can no longer significantly affect the ratings of scenarios. Note how all this happens without a moderator having to be involved.

By rating reviewers each time they submit a review, the scenario designers have the power over time to rule out trolls, without individual reviews having to be assessed and argued about. Why I'm putting this forward: 1) It gives a system to let people quickly find highly recommended scenarios. 2) It makes reviewers accountable for their reviews. 3) It lets questionable reviews be dealt with without a moderator having to put effort into sorting through an argument about individual reviews. GaJ.
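The weighted-average calculation worked through above can be sketched in a few lines of Python (purely illustrative; the Scenario Depot itself is not implemented this way, and the function name is my own invention):

```python
def weighted_recommendation(reviews):
    """Average the recommendation scores, weighting each one by the
    reviewer's own rating (as awarded by scenario designers).

    `reviews` is a list of (reviewer_rating, recommendation) pairs.
    """
    total_weight = sum(rating for rating, _ in reviews)
    if total_weight == 0:
        return None  # no credible reviews yet
    return sum(rating * score for rating, score in reviews) / total_weight

# Joe: reviewer rating 4, recommends 4. Troll: reviewer rating 0.5, recommends 1.
print(round(weighted_recommendation([(4, 4), (0.5, 1)]), 1))  # 3.7
```

Because the weights appear in both the numerator and the denominator, a reviewer whose rating has sunk toward zero contributes almost nothing to the result, exactly as in the Joe/Troll example.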
  3. Is this something authors really want? Although it is easy to implement, what would the authors want to do with the data? Sure, the authors can get a list of people who downloaded their scenario, but do the authors want to then begin pestering players for reviews? Even if there is simply a list of usernames and download counts ratioed to review counts, that's a feature that might cause many people to stop downloading and playing altogether. This will really have to be justified by the authors and approved by a large number of players prior to implementation.
  4. Actually, this is a regular "complaint" ... "we don't get enough reviews", but I would say 100+ a month is not too shabby...
  5. I guess you'll have to call me stupid. It worked really well for me though: downloading highly recommended scenarios always got a good result, and quickly. It amazes me how much time some people have to go about searching for a good one and reading reviews to make the selection. Good luck to you. I would like to have a quicker way to get to the good ones! Although I think I've reviewed every battle I've played, I have to agree that forcing people to do reviews won't do anyone any good. It will just generate rubbish reviews. I want to read reviews by people who wanted to write them. GaJ.
  6. Sure. If they had had them. A whole what? Michael
  7. This mod increases the range of darkness->lightness for the Snow "Open Ground" tiles (and treed tiles). This improves your ability to see what the shape of the ground is like. The price you pay is that the boundaries between tiles of differing heights are more pronounced. For me, it's well worthwhile. The picture I have attached shows a scene with and without the mod. Somehow it doesn't do justice to the mod: I think it looks better in real life than in the picture. Even so, I think you will see that the top picture makes the valleys clear, whereas in the bottom (original) they are somewhat harder to spot. When the hills are more subtle the effect is even more useful. (I'm not entirely as happy with the snow ones as the others, because the only thing that contrasts with white is grey, so low areas end up looking very grey. I may improve on that sometime, but in the meanwhile, this is functional. Pretend the sun went behind a cloud...) GaJ (you need to log into cmmods to see this picture...but you're always logged into cmmods, right?) [ October 28, 2004, 04:39 AM: Message edited by: GreenAsJade ]
  8. Back in this thread the posting of High Contrast Sand was kinda announced... the applause was underwhelming, so either it was rubbish or no-one noticed. I have just added Grass to the "high contrast" mod collection, and have Snow, Light Snow and Barbed Wire sitting here waiting to be uploaded when I get inspired.... Each of these mods increases the range of darkness->lightness for the target tiles. This improves your ability to see what the shape of the ground is like. The price you pay is that the boundaries between tiles of differing heights are more pronounced. For me, it's well worthwhile. GaJ.
  9. With the exception of Steve's, the other 4 finalist AARs are darn long. It's taking ages to slog through them and try to gain either entertainment or enlightenment... Sunday might be too soon to have formed an opinion. GaJ.
  10. Anyone other than me think it's harder now to find good scenarios to play? We've heard from people who think it's still fine. What about any others? And what about the detailed suggestion I made for change: nice & simple, and addresses the problems. Any comments on that? What's wrong with what I suggested? GaJ.
  11. 1) Yes, you have to read the reviews, but how many do you want to look at before you find ones saying good things? How did you choose which ones to look at? Before, if you wanted a highly recommended scenario, you could find a set easily, then read a few reviews to finalise your choice. 2) The ratings _are_ comparable. They are just not trying to be scientifically precise. There are some people who can't handle imprecision. Do we have to have no useful ratings at all because of those few people's limitation? 3) Sometimes I will feel like playing a scenario blind. Sometimes I will want to quickly find a good one. With the new arrangements I can still find scenarios blind... but I can no longer quickly find a highly recommended one. Is the problem with the word "difficult"? It's not more "difficult" in that you still just press your browser button to download. It is, however, far more time consuming to find highly recommended scenarios. I really miss that - it was what made it good before. Anyone else agree, or do y'all wonder what I'm talking about, like Sergei? GaJ.
  12. I tend to settle for more cryptic comments - that only make sense when you've actually seen the movie. Like "Dang, 50m further east would have been nice"!
  13. 3) gives running commentary of battle, and is not terribly afraid of giving some minor detail away (such as unit quality, i.e. reg. or vet.) in order to comment upon effectiveness of tactics, his or mine. I find this a dilemma all the time. On the one hand it's a lot more fun having an exchange of words with the real person you're playing with; on the other hand "spoilers" are in fact usually just part of psych warfare, and it's not really clear whether that's appropriate. I guess a considerate opponent (the best sort) will figure that out with you, either by asking, or judging your response. For example, with one opponent I sent my usual comment at the beginning of the game about how I was feeling about it (scared, as it happened) and he responded with some comment about "ah, psych warfare", which I read to mean "please don't keep that up, it bores me"... so I stopped, and we continued in relative silence. GaJ.
  14. Open Office is available for the Mac, is free, and can do a reasonable job of reading MS Word documents. http://www.openoffice.org/product/
  15. I don't doubt that the current system is fine for people who just want to put scenarios _in_. It's just not as good as it used to be for people who want to quickly find good ones to take _out_.
  16. Heck, make it so you have to register to download too, so we can see who's downloading heaps and reviewing little... GaJ.
  17. Adm - thanks so much for listening to our feedback and being prepared to contemplate change yet again! There will be as many different system suggestions as people responding, I'm sure. The issue of the person being reviewed not liking reviews is not unique. Even EBay has it, and copes, with a (mostly) automated system. Here is my suggestion...

1) Get rid of all existing numeric scales and replace them with one: recommendation for the scenario - a rating of 1 to 5. This is really the most useful thing that you can record a number for - to help users quickly find highly recommended scenarios. This is the thing that has gone away in V2.

2) Have a check box to indicate whether the recommendation is for PBEM or AI, and make both the PBEM and AI ratings of scenarios available. People are looking for either PBEM or AI, not both at the same time (usually).

3) Have a text review section where people are encouraged to discuss map design, play balance, briefing etc, but not give ratings to those. SO: when I review a scenario, I log in, click "PBEM" or "AI", select a number from 1 to 5 as a recommendation rating, and optionally provide a text review. The rating is _clearly_ subjective (so we don't have arguments about the details of how to come up with it) and the review process is made very easy for those who want to provide feedback quickly.

4) Make it so that you have to register to review. The _designers_ can give the _reviewers_ a rating. One rating from each designer for each reviewer - a "fairness" number, and some text to explain why. The designers can probably change their rating if things change. Everyone can see a reviewer's ratings, just like EBay. Maybe list the "Top 10 review contributors" and "Top 10 fair reviewers". ...And the average of a reviewer's ratings is used to weight their recommendation rating in the overall score for every scenario they review.

Ideally, the overall recommendation calculation for each scenario that a reviewer has reviewed would be revised each time that reviewer's rating changes. Hence repeat offenders become less and less significant in ratings, while becoming more and more obvious on their own rating page. This moves the argument away from one bad review ... one bad review: put up with it... if they are really a troll, it will emerge. Whadaya reckon? GaJ. [ October 22, 2004, 04:23 PM: Message edited by: GreenAsJade ]
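The whole proposal - designers rating reviewers, reviewer ratings weighting recommendations, and scores being revised whenever a reviewer's rating changes - can be sketched as a toy model. Everything here (the `Depot` class and all method names) is my own hypothetical illustration, not anything the site provides; note that because the recommendation is recomputed from the reviewers' *current* ratings on every lookup, a change to a reviewer's rating automatically revises every scenario they reviewed:

```python
class Depot:
    """Toy model: designers rate reviewers, and scenario scores are
    recomputed from reviewer-weighted recommendations on demand."""

    def __init__(self):
        self.marks = {}    # reviewer -> list of "fairness" marks from designers
        self.reviews = {}  # scenario -> list of (reviewer, recommendation)

    def rate_reviewer(self, reviewer, mark):
        self.marks.setdefault(reviewer, []).append(mark)

    def reviewer_rating(self, reviewer):
        marks = self.marks.get(reviewer, [])
        # Assumption: a reviewer nobody has rated yet gets a neutral weight of 1.
        return sum(marks) / len(marks) if marks else 1.0

    def add_review(self, scenario, reviewer, recommendation):
        self.reviews.setdefault(scenario, []).append((reviewer, recommendation))

    def recommendation(self, scenario):
        entries = self.reviews.get(scenario, [])
        weighted = [(self.reviewer_rating(r), s) for r, s in entries]
        total = sum(w for w, _ in weighted)
        return sum(w * s for w, s in weighted) / total if total else None


depot = Depot()
for m in (4, 5, 3):
    depot.rate_reviewer("Joe", m)      # Joe's reviewer rating: 4.0
for m in (0, 1):
    depot.rate_reviewer("Troll", m)    # Troll's reviewer rating: 0.5
depot.add_review("Sullen Well", "Joe", 4)
depot.add_review("Sullen Well", "Troll", 1)
print(round(depot.recommendation("Sullen Well"), 1))  # 3.7
```

If Joe later "goes psycho" and designers start marking him down, `reviewer_rating("Joe")` falls, and every scenario he reviewed re-weights itself with no moderator involved.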
  18. Me too - as I said at the time. The reason I'm raising it now is that we've had some time to try the new arrangements, and I, for one, find them much less helpful than before. It's still nice to have a place where the scenarios are kept, but it was a way nicer facility before. We've lost ease of use to solve a few arguments between scenario designers about rating systems. For me, I'd say "who cares what the system is, just give us at least some quick way to sort out the scenarios that people liked from the ones they didn't"!
  19. Once upon a time you could go to the scenario depot and be assured of finding a quality scenario quickly. If you wanted to experiment with something unknown, you could, but you didn't have to. If you wanted a sure-fire good scenario, you picked one with a high review rating and away you went. Sure, we may not all have agreed whether one scenario was really 0.26 better than another, but who really cared? Now it's a lucky dip. You have to either just take one whose name appeals to you, or trawl through all the reviews trying to make out if it will be OK. So sad [ October 22, 2004, 12:12 AM: Message edited by: GreenAsJade ]
  20. The "compare" feature is just outstanding! It really makes this a great tool for selecting forces. Could the list of units in the "main" part of the page be determined by dropdown menus instead of the click-to-expand links? I'd like to be able to go from "US Support Mortars" to "British Support Mortars" just by changing the country through a dropdown menu, instead of all the way back to the top and down again... GaJ.
  21. At first I couldn't tell if it was an April fools joke. "Charles will be constructing a celestial model to accurately position the stars." I guess he has to have some eye-candy fun, but I hope borg spotting, command mechanics and TacAI are getting as much attention! GaJ.
  22. I thought that there was going to be one winner in the end. Is that Walpurgis?
  23. Doh - my mistake - of course an Island was CMBB (not too many islands in the desert). Any Hans scenario is a good bet, and Sullen Well is no exception, though Sullen Well is quirky and for that reason may not appeal to everyone. (I enjoyed it, but my opponent wasn't wild about battling ACs). GaJ.