Everything posted by WWB

  1. I ran my tests in November '42 and things worked, but now I think I see the problem. Look in the editor: the units are listed with (radio) in their title but slow as their speed. That slow speed indicates they are actually wire spotters insofar as the game is concerned. So it is either a mislabeling or a missed checkbox. WWB
  2. Seems to be working. Do you have a save game you can send me? Email is in profile. WWB
  3. Actually, after a bit of moaning, the designers learned to love them. Now we can indicate to an attacker where the enemy line *should* be . . . WWB
  4. You would be surprised how many people forget, or screw this up. Presumption is the mother of all screwups. The designer knows how his work is to be played, and it is his responsibility to communicate it to the players. Whether or not a player follows said advice is up to the player. The 'not' in the sentence in question should be 'note.' Last time I checked, scenario files included briefings. So someone plays a scenario the 'wrong way' and trashes it--let's say single player for a multiplayer battle. A potential player sees that, through the checkboxes, a reviewer played that scenario single player and it sucked. They also see the multiplayer recommendation and are still free to download it. A review is, by nature, subjective. Moreover, what exactly are the hard factors? Is there a book of technically perfect scenario design? No. There are a few no-nos, but in general whatever works and is generally fun to play is acceptable. I think I see our main difference here. Personal attacks aside, I contend that there is no such thing as an unfair review. As the author, you might not agree with a player's assessment, but that does not mean such a review is invalid. Far from it--I learn more from reviews I do not agree with than from those which say "Bravo, good job!" Has it occurred to you that designers are as apt to abuse the system as players are? This system would allow designers to completely limit their exposure to anything they think might be unfavorable, especially given that it will be added after the fact. Oooh, scenario X is rated horribly for play vs the AI. Well, I will just weight that as 0 and push it up the list. This is fundamentally wrong. As I have stated earlier, any work released to the public should be weighed on its own merits alone. The author should not, save by releasing a revision, have any opportunity to skew how it is seen by the public. There is one question at the heart of this matter: what purpose should the depot serve?
I think it is a place for players to download scenarios, with some help in picking from players who have played the battles before. It is there to allow designers to get some limited feedback, though really most of the feedback to the designer should have happened during playtesting, not post-production. It is not a place for designers to inflate their egos, and it most definitely does not exist so authors can ensure their scenarios are rated the best possible. I would be interested in hearing what answers others have to the above question. Also, this discussion has been entirely one-sided, consisting just of designers. I would be interested in hearing from a player or two about what they would like done. Who knows, many might think it works just fine as it is and that the ratings are perfectly fair. I would also remind us that our purpose here is to come up with a good, simple, reliable way of rating battles and to help Keith implement it in whatever ways we can. Adding entire layers of complexity to the process does not take us further towards this goal. WWB
  5. And what I am saying is: 1) The mechanism to make it crystal clear is having the designer supply the weighting for each aspect. If your scenario is for PBEM, put a weight of 10 on PBEM when you submit it. People can then look for scenarios that are PBEM 10 and know they are getting scenarios designed for what they want to do. And 2) Having this intent cause reviews relevant to that intent to carry higher weighting in the overall score than reviews that target non-relevant aspects. And do this using a simple, elegant mechanism. If the designer weighted PBEM 10 and "vs AI" 0, and the reviewer gave "vs AI" 3, then 1) a reader knows the scenario sucks vs the AI, 2) the designer is not offended by this rating, because 3) the overall rating is not pulled down at all by the "3". GaJ.
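The weighting mechanism GaJ describes above can be sketched in a few lines. This is a hypothetical illustration, not the depot's actual code; the function name `overall_rating` and the aspect names are invented, and the numbers come from the example in the post.

```python
def overall_rating(designer_weights, review_scores):
    """Weight each aspect's review score by the designer's declared
    emphasis, so low marks on an aspect weighted zero cannot drag
    the overall rating down."""
    total = sum(designer_weights.get(aspect, 0) * score
                for aspect, score in review_scores.items())
    weight_sum = sum(designer_weights.get(aspect, 0)
                     for aspect in review_scores)
    return total / weight_sum if weight_sum else None

# The example from the post: PBEM weighted 10, "vs AI" weighted 0.
weights = {"PBEM": 10, "vs AI": 0}
scores = {"PBEM": 9, "vs AI": 3}   # a reviewer pans it vs the AI
print(overall_rating(weights, scores))  # 9.0 -- the "3" has no effect
```

The design choice is exactly GaJ's point 3): the "vs AI" score is still stored and shown to readers, but its weight of 0 removes it from the headline number.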
  6. This is nonsensical. If I design a scenario specifically for PBEM play, and the game is playtested exclusively by PBEM play and eventually balanced for PBEM play, how exactly does this become apparent from the scenario file? That is impossible, since the only way to make this apparent is to say so in the briefing. What then, short of saying up front "this is for PBEM play", would stop someone from playing this scenario vs. the AI, finding it unsuitable, and then marking it down via whatever review process we end up with? You say that the effect should be in the scenario file. This is clearly impossible. Unless of course we're still missing your point; if so, I say quite honestly it hasn't been clearly presented and would invite you to try again(?) Perhaps you can give us an example of what you mean by "effect" and how this is made obvious in a scenario design?
  7. I think my entire point was missed. Basically it is that the work must stand on its own, in and of itself. The author should get no chance to say "what I really meant for you to think is X." If he really meant for someone to think that, he should make sure that effect is in the scenario file, not after the fact at a website. An author's intent is irrelevant. The effects created by the work are relevant. As for retrofitting ratings, it actually would not be too hard to do with the new system. Just take the averaged rating, divide by two and round to the nearest integer, presuming we are going with the 1-5 scale. The issue is that none of the old ratings have registrations attached, and it is likely too much effort to get them attached, so they will be less valid to a large extent. WWB
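The retrofit arithmetic above (averaged old 1-10 rating, divided by two, rounded to the nearest integer) is a one-liner. A minimal sketch, with a hypothetical `retrofit` helper; I have assumed halves round up, which the post does not specify:

```python
import math

def retrofit(old_average):
    """Map an old averaged rating on the 1-10 scale onto the proposed
    1-5 scale: divide by two and round to the nearest integer
    (halves round up)."""
    return math.floor(old_average / 2 + 0.5)

print(retrofit(7.3))  # 4
print(retrofit(9))    # 5
```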
  8. Good suggestions GaJ, but one part is fundamentally flawed. Back in younger and freer times I did a bit of creative writing. My professor drilled one point home--author's intentions do not matter once a work has been made public. There is no way an author should be allowed to "weight" reviews of a given battle. No player knows the author's intentions when opening a battle, aside from what should be made clear in the briefings. If an author makes a battle only suitable for AI play, but expects people to divine that, he should be penalized, rather than allowed to jimmy things around to hide his error. WWB
  9. PM: I was not too clear above. Basically, if you are involved on a team that did a battle, don't review it. If you playtested a battle, don't review it. If you know the guy, that is your call. There are a pretty limited number of scenario junkies, so limiting all connections is a bit of overkill. PC: Getting uppity about a minor typo? And, no, I did not get the memo on the demise of the CSDT. I think we agree on benefit, though I think the designer should stick to the author comments (note to Keith: feature to keep), and if the playtesters have a comment, it probably should have been made during playtesting. WWB
  10. I think we are arriving at some sort of a consensus here, notwithstanding 5-page posts about the social effects of cabals. *WWB puts his web developer hat on* I think the favorite reviewer thing is a great idea, but we really need to come up with a set of priorities in this case--Keith does not have an unlimited amount of time for this project. Priorities should be something like this: 1: Adjust the ratings system to whatever is agreed upon here (Andreas! Hans! We need your 2c!) 2: Build registration features. Note that making a no-registration site registration-based is essentially a rewrite from scratch. There is no method to say "And you are now blessed with registration!" and have it appear. 3: Develop search and display mechanisms based on the new ratings and registration system. 1 and 2 have to happen. Number 3 has to happen to some extent, but not the same extent as 1 and 2. *WWB puts down developer hat as scenario and scenariocollection objects start floating in his head.* One other, more sociological comment on the whole thing. Over at B&T we have had a long-standing policy of NOT reviewing one another's work. Our time for feedback is in playtesting, not in pumping things at the depot. Now, we have a reasonably popular website and some 'brand recognition' from which to push our wares, so getting onto the vaunted lists is not as big an issue. But I really think a gentleman's agreement not to have playtesters review battles would be a good thing. I know who is on DK or the CSDT and can ignore those reviews as tainted, but the general public does not. What say ye other 'corporate' designer types? WWB
  11. I like it better myself. And that font-size change is a global setting change in IE, even if it is not on your crappy OSS browser. WWB
  12. Berli--I think you just unmuddled my thoughts. Here is my official suggestion: Archive the text portions of the old reviews, and maybe the overall ratings. Go registration required. Make reviews a simple jefe-style thumbs up/thumbs down, but add a few checkboxes for the following: I Played This Scenario: []Allied vs the AI []Axis vs the AI []Multiplayer That way people could search for reviews relevant to how they play the game. WWB
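The suggestion above amounts to a very small review record: one boolean plus a set of play-mode checkboxes, filterable at search time. A hypothetical sketch (field names and reviewer handles are invented for illustration):

```python
# Each review: a thumbs up/down plus checkboxes recording how the
# reviewer actually played the scenario.
reviews = [
    {"reviewer": "alpha", "thumbs_up": True,
     "played": {"allied_vs_ai", "multiplayer"}},
    {"reviewer": "bravo", "thumbs_up": False,
     "played": {"axis_vs_ai"}},
]

def relevant_reviews(reviews, mode):
    """Return only the reviews from people who played the given way,
    so a reader can ignore, say, vs-AI reviews of a PBEM battle."""
    return [r for r in reviews if mode in r["played"]]

for r in relevant_reviews(reviews, "multiplayer"):
    print(r["reviewer"], "up" if r["thumbs_up"] else "down")
```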
  13. Been meaning to put in my 2c on the subject, but got sidetracked and have needed to collect my thoughts. First and foremost, I wholly support Admiral Keith and the Scenario Depot. Even as it stands now it is a wonderful resource. Without it the trade of making scenarios would be in serious difficulty, and the "little guy" without a website would have nowhere to post his work. I also fully understand the paying clients situation. Designing a database-driven website is not a simple matter of making a few Dreamweaver templates and manufacturing pages. Getting all the programming to play well together, especially when dealing with a large body of existing data and accepting updates from the general public, is not an easy job. That said, it is pretty clear that time has proven the ratings on the depot to be somewhat inaccurate. I like Jeff's proposal a lot. On the other hand, I also like having multiple ratings for different categories. Different players like different things in battles, and I think that reviews should reflect that. There is also the question of how to handle the existing data. This is possibly the most crucial part of this process. If one chooses to just drop or archive it, one has a lot more freedom of action. Whereas if one chooses to update it in some form, then one is, by definition, somewhat tied to former design decisions. The problems with the current system as I see them are thus: 1) Too much gets lumped into the PBEM/AI playability ratings. Basically, this rating is the bulk of the review. Everything else is pretty much eye candy. This rating covers balance, fun factor, overall design and just about anything else the four remaining categories don't cover. 2) Replayability really should be a boolean value, and not averaged into scores. A scenario either is or is not replayable. Given that, AFAIK, most people will play a battle once, it should not be rated and averaged into the total.
3) As Keith notes, the anonymity lets people bump up their friends, take down their enemies and otherwise be general nuisances. I cannot quite claim to have my thoughts together enough to issue a specific proposal, but I shall discuss the options already on the table. While Grog Dorosh's system covers just about every conceivable base, it is a bit arduous at best. Where there were once five ratings there would be nearly a dozen. That is a bit overblown, not to mention a very large break with the past. People already tend not to review battles, even with comparatively little to think about; I suspect fewer would put up with filling out more fields. In addition, it should be noted that, no matter how clearly the criteria are laid out, any review is subjective. Also, I think dropping multiplayer ratings is a grave mistake. The people who care enough about a game to go seek out, and review, scenarios are far more likely to play multiplayer. And I know of a lot of folks who choose their scenarios based on the multiplayer ratings at the depot. I think the best path lies somewhere in the middle. I do think the idea of making it a registration-based site is the right course for the future. This does present a nasty problem with existing reviews: once one moves to such a registered system, all the pre-existing, non-registered reviews are by definition invalidated. But it might well be worth it, for it would stifle trolls to some extent while also allowing one to do things like search by reviewer. In any case, such a move is a lot of coding work, as one is almost starting from scratch, even with a very good set of existing data. One other ancillary and technical note: one can store the data as numerical values, but present text to the end users. This might go a long way towards normalizing the scores. In any case, I am pondering useful suggestions. WWB PS: Rob, PHP is very easy to learn, especially if you already know JavaScript. MySQL is just a database, like many others. Would not be a bad skillset to pick up.
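The "store numerical, present text" note above is a standard trick: averages and sorting work on the integers, and only the display layer maps them to labels. A minimal sketch; the label wording is invented for illustration, not taken from the depot:

```python
# Ratings live in the database as integers; text labels are applied
# only at display time, so the stored scores stay comparable.
LABELS = {1: "Poor", 2: "Fair", 3: "Good", 4: "Very Good", 5: "Excellent"}

def display(stored_value):
    """Map a stored integer rating to its user-facing label."""
    return LABELS.get(stored_value, "Unrated")

print(display(4))  # Very Good
```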
  14. Sweet. Good to see you all back in business. WWB
  15. I suspect the site is working fine. Just, after the crash, few CMBO mods have been uploaded. WWB
  16. As stated above, you need to set up port forwarding on your router (sometimes called virtual servers, among other names). To do so: 1) Forward port 7023 from your router to the computer you wish to host from. 2) Give your opponent your public IP. The best way to get it is to go to http://www.whatismyip.com/ . 3) For good security practice, remember to close the port after playing the game. WWB
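Once the forward is in place, a quick way to sanity-check it is to attempt a TCP connection to the port. A rough sketch with a hypothetical `port_reachable` helper; the host IP in the comment is a placeholder, not a real address:

```python
import socket

def port_reachable(host, port=7023, timeout=5):
    """Try a plain TCP connect to host:port. If the router forwards
    port 7023 correctly and the game is hosting, this should succeed;
    a refusal or timeout suggests the forward is not working."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder IP -- use the public IP from whatismyip.com):
# print(port_reachable("203.0.113.5", 7023))
```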
  17. If this was GI Combat, there would be animations of him scarfing down sandwiches, lancing desert sores on his lip, and leaning in real close to look at his maps, too. I still chuckle when I see that newsreel footage of him with the maps - it's like he's trying to sniff the British out!
  18. Noise warnings are still there, even if I did not mention them. But the dust is a bit more dramatic and topical. I also find dust to be a generally more accurate indicator, especially when it comes to numbers. WWB PS: Tom, a player cannot target the 37mm and 75mm independently. Which is reasonably accurate, as there was only one TC to call out targets and help spot short rounds. The 37mm will fire on targets to the side, though. WWB [ October 25, 2003, 02:26 PM: Message edited by: WWB ]
  19. Thanks for sending the stuff out. I did not get a chance to examine it in detail, but I did take a glance and it looks awesome. I hope you were not trying to send it on a dialup connection! WWB PS: If anyone was wondering, the maps look to be from the British Ordnance Survey, a very nice set of maps that they are.
  20. CMAK is a bit harder on the video card than CMBB, mainly because of dust and some higher poly-count models. Dust will be especially hard on the high-RAM TNT card holders; they never could do transparencies no matter how much VRAM they had. Still, it ran quite nicely on a Celeron 800MHz w/ 384MB RAM & a GeForce2 MX, while that Celeron was running SQL Server, IIS, Roger Wilco Base Station & MySQL. So, in summary, the biggest variable really is the video card. Machine speed does not come into play until one gets to larger battles, and even then it is just a matter of the blue bar taking longer or shorter. Best advice is to play the demo; if you don't like how it works, you will not like how CMAK works. WWB PS: And, just for a better computing experience in general, I would highly recommend more RAM. It is dirt cheap these days. [ October 23, 2003, 06:01 PM: Message edited by: WWB ]
  21. Yes, please, in PDF. Email is wwb@3dwargamer.net . WWB
  22. Stay neutral. Live to old age and hang out with my buddy franko. WWB
  23. Comparatively, they still don't have the oomph. Understand that I use high-end Macs, and I haven't seen the PC that compares in speed when working with large graphics files. Part of that is Windows' fault, being a major speed bump for PCs. One of the biggest annoyances I have with the PC version is the weird way it blocks the desktop... something it does, if I understand it correctly, because of the way Windows handles separate document windows.
  24. Make sure caps lock is not down. CM hotkeys are case sensitive. WWB