
Assaulting anti-Tank/Infantry guns



Straw man again. OR can operate over ensembles of conditions as readily as over fixed assumptions. Also, plenty of people predicted the low US losses in Desert Storm with perfect accuracy. "Liberation in less than a week, casualties under 1000", for example.

More, as an answer it is a dodge from the substance of my charge. Which is that there actually are better and worse responses to definite, common tactical problems, and the corps (and army) could easily determine them and teach them. Instead it covers the manual writer's backside with a load of guff about perfection being impossible due to uncertainty, the net result of which is that some inexperienced captain gets to wing it.

It isn't only unwillingness to make a decision under stress and uncertainty, that betrays a lack of moral character. Abdicating the responsibility to teach tactics with actual content, is at least as bad, with less of an excuse.

[ April 21, 2008, 05:33 PM: Message edited by: JasonC ]



Adam - sure MG continuance is CM gamey, but it is an issue with the CM morale module. Rally in the game operates as a recovery per unit time snapback to perfect, but it is slower the farther into suppression states the unit is. That means the net suppression a unit can "shrug off" successfully is not a linear function of the total suppression it receives, but depends on the lumpiness and pattern with which it was delivered.

A big lump up front followed by a lot of little ones delivered evenly will keep a unit pinned, because the low early morale state reduces the snap-back rate, and the even little stuff can match that rate and prevent rally out of the achieved suppression level. Unevenly spaced small lumps will instead leave the unit at OK half the time, with occasional "shaken" yellow states in which the game does not prevent outgoing fire. In the game, mere MG fire at range against men in trenches could be continued nearly forever, with the likely result a few "alerteds" and an occasional "shaken" lasting less than 10 seconds. That is just the model's limits.
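The lumpy-versus-even mechanism described above can be sketched as a toy simulation. All the numbers (recovery rates, the "pinned" threshold) are invented for illustration, not CM's actual internals:

```python
def recovery(s):
    # Invented rates: recovery is fast when lightly suppressed,
    # slow once the unit is deep into suppression states.
    return 3.0 if s < 3.0 else 0.5

def simulate(fire):
    """fire: suppression delivered at each time step."""
    s, history = 0.0, []
    for hit in fire:
        s += hit
        s = max(0.0, s - recovery(s))
        history.append(s)
    return history

STEPS = 60
PINNED = 3.0  # arbitrary threshold for "still suppressed"

# One big lump up front, then steady light fire.
lump_then_trickle = [10.0] + [0.5] * (STEPS - 1)
# Occasional small lumps with quiet gaps between them.
spaced_lumps = [5.0 if t % 10 == 0 else 0.0 for t in range(STEPS)]

a = simulate(lump_then_trickle)
b = simulate(spaced_lumps)
print(sum(1 for s in a if s >= PINNED), "steps pinned under lump-then-trickle")
print(sum(1 for s in b if s >= PINNED), "steps pinned under spaced lumps")
```

The first schedule keeps the unit above the threshold for the whole run, because the trickle matches the slowed snap-back rate; the spaced lumps leave it rallied back to zero about half the time.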

In the real world, morale and recovery from fire are less predictable. Probably recovery is considerably faster and easier in the game, too. I mean, you can arrange volumes of fire and rates of delivery in CM that, to a near certainty, will leave the target unaffected within a few minutes, when in real life it would probably scare the bejesus out of them half the time, etc. Just by tuning the level of incoming suppression to the game's model of morale recovery rates. Yes, those have some randomness in them. But a large enough sample size of microevents will produce near predictability of the average.

In real life, you don't need to distinguish between the kinds of fire that can cause a pin and the rate at which men rally, etc. MG fire can cause pins in real life, and suppressed defenders have sharply reduced visibility. It is also undoubtedly easier to knock out guns in real life by direct hits - there simply aren't any in the game. On the other hand, firing guns were harder to locate in real life. Etc. It is a mistake to look for more realism from the thing than it can deliver in each micro process. It is enough if it sets the incentives and tactical relationships about right, overall. But as a player, you have to manipulate it as it is, and not how it arguably ought to be.


Originally posted by JasonC:

More, as an answer it is a dodge from the substance of my charge. Which is that there actually are better and worse responses to definite, common tactical problems, and the corps (and army) could easily determine them and teach them. Instead it covers the manual writer's backside with a load of guff about perfection being impossible due to uncertainty, the net result of which is that some inexperienced captain gets to wing it.

It isn't only unwillingness to make a decision under stress and uncertainty, that betrays a lack of moral character. Abdicating the responsibility to teach tactics with actual content, is at least as bad, with less of an excuse.

Tactics without actual content is hardly the problem. According to studies, the problem is that commanders tend to be rigid attritionists who order frontal attacks based on the dated data of their CES. Thus maneuverist theories like "recon-pull".

Originally posted by JasonC:

Straw man again. OR can operate over ensembles of conditions as readily as over fixed assumptions. Also, plenty of people predicted the low US losses in Desert Storm with perfect accuracy. "Liberation in less than a week, casualties under 1000", for example.

More, as an answer it is a dodge from the substance of my charge. Which is that there actually are better and worse responses to definite, common tactical problems, and the corps (and army) could easily determine them and teach them. Instead it covers the manual writer's backside with a load of guff about perfection being impossible due to uncertainty, the net result of which is that some inexperienced captain gets to wing it.

It isn't only unwillingness to make a decision under stress and uncertainty, that betrays a lack of moral character. Abdicating the responsibility to teach tactics with actual content, is at least as bad, with less of an excuse.

First sentence does not make sense. OR can operate over whatever you want, but it will always be based on certain assumptions. The first has nothing to do with the second. That is how math works.

As to the plenty of people predicting the desert war casualties with accuracy, you miss the whole point.

The commanders AT THAT TIME could not possibly know WHICH MODEL GAVE ACCURATE RESULTS.

Different models gave results differing by tens of thousands, although they were all based on OR, the source of the "right answer".

The last part of your comments needs attention also.

Three points.

First, since you mentioned search theory, I will use this as an example.

In theory, you can say "let X be the search radius" or whatever you want.

Even if your model is accurate enough to define various relationships between the radius and other variables, you still have a big problem when you try to apply it in practice.

The military in the field want the search radius as a specific number. Which number is that?

Is it going to be 7 miles or 5 miles, for example?

In order to assign the right values, you need perfect information, which in reality you do not have.

From accurate temperature and humidity values over the area of operations, to the accurate state of the sea, to accurate predictions of cloud formations, and dozens of other parameters.

If your idea is that you should stay passive waiting for the perfect picture before you plot your mission, good luck with that.

The second issue is that, even if it were theoretically possible to gather all the necessary data, during the development of operations commanders in the field do not have the manpower, the tools, or the time to gather, feed, and process all of those details.

They did not do it in WWII, and they do not do it now, in spite of the fact that technology has advanced dramatically since then.

Computers today do help in many areas, but they still need data to feed them.

So in the end, what really happens is that OR helps in the development of certain norms or rules of thumb which you can certainly use in the field, but which for sure do not constitute the "best answer" for a specific situation in the field.

The really heavy stuff in OR happens in the rear, inside universities and institutes, where whole study groups spend thousands of hours analyzing data, trying to advance OR even further, which in turn produces more guides for the commanders in the field. However, this process is slow and is conducted independently of whatever happens in the field.

A rule of thumb based on some type of scientific process may still help Marines in developing a sound plan in the field. As the manual says, you do not aim for the best; you aim to execute a "good" plan fast. That is why you see Marines also using OR. In spite of your charges, it should be clear that they also believe there are better and worse plans.

It is just that they do not want to hear a voice shout "hands up" inside their command post while they are gathered around their computer, inserting the last necessary parameters for the calculation of the "best plan".

[ April 22, 2008, 03:16 PM: Message edited by: pamak1970 ]


"First sentence does not make sense."

It does, you just didn't get it, as your 7 miles or 5 miles example below makes painfully obvious.

"always be based on certain assumptions."

The assumption need not be "search radius is 7 miles" nor "search radius is 5 miles". It can be "search radii will vary in a roughly flat distribution with a mean around 6 miles and a spread around 1.5 miles". No straw man certainty required. You can give tactics by quintile in eighty variables if you want. They won't actually vary all that much. Asymmetric use of range or superior vision will drive a lot of it, etc.

"The commanders AT THAT TIME could not possibly know WHICH MODEL GAVE ACCURATE RESULTS."

Well I could tell them, I had accurate results, so did the other political scientists I worked with, and they said so too, loudly, in public. If they have incompetent CYA staffers around them flopping all over the map, that just means they can't manage OR projects any more than they can write truthful and direct manuals.

"you still have a big problem when you try to apply it in practice."

Nope, not at all. The captain trying to apply the package of sleep-inducing bromides the manuals feed him, on the other hand, hasn't got a prayer. He wings it.

"In order to assign the right values, you need perfect information"

Exactly the same nonsense strawman the manual relies on, and exactly as false. The conclusion in fact will not turn on the number picked, as sensitivity analysis will rapidly show, and you can put in whatever you know as muddy as you know it, and get back advice exactly as fuzzy as your knowledge warrants. While still clearly seeing that these 5 bonehead tactics the captain might otherwise try as he wings it, are utterly dominated by this other set of 3 he can choose among.
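A minimal sketch of that sensitivity point, with invented payoff functions standing in for a real OR model; the only claim it illustrates is that the ranking survives the mud in the input:

```python
import random

random.seed(0)

# Hypothetical payoffs for three candidate tactics as a function of an
# uncertain variable x (say, effective enemy range). All numbers invented.
tactics = {
    "frontal rush":    lambda x: 10.0 - 2.0 * x,
    "flank via woods": lambda x: 12.0 - 0.5 * x,
    "smoke + assault": lambda x: 11.0 - 0.6 * x,
}

# x is known only as "around 6, give or take 1.5" -- feed in the mud as-is.
samples = [random.uniform(4.5, 7.5) for _ in range(10_000)]

means = {name: sum(f(x) for x in samples) / len(samples)
         for name, f in tactics.items()}
for name, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} expected payoff {m:6.2f}")
```

The ranking is the same at every x in the band, so the bonehead option is dominated no matter how the unknown shakes out - exactly the kind of answer fuzzy inputs still support.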

The next straw man is that the right answer is too predictable, and the seat-of-the-pants, in-the-loop commander needs to be able to duke it out with his opponent - which would be a fine point if you couldn't OR poker, but you can, so... The answers just come back as strategy sets, with moves and counters to them, and they include ways of being fuzzy enough that the other guy can't readily deduce your next move from your last, etc.

Enough. The reason Marines don't teach their captains OR is that they don't think they could learn it. And they don't know much of it themselves, so they relegate it to geek subspecialties that do get optimized, while their all-purpose strawman bromides keep the math at bay and let testosterone and jockhood run the corps.

"If your idea is that you should stay passive waiting for the perfect picture"

Strawman again, the same one, the same three pieces of wet mangy straw, there isn't anything else. No my idea is not that you should stay passive waiting for the perfect picture. Being against the ridiculous hype of modern maneuverism doesn't mean being literally wedded to physical immobility, and thinking speed isn't a panacea is not a recommendation to trade in soldiers for rocks that sit stock still for eons.

I'll even allow that your preferred maneuver theory occasionally allows a Marine to fire at the enemy. OK? So can we drop the truly absurd?

"commanders in the field do not have the manpower"

Um, they don't have the manpower to write a manual that is cogent instead of insulting the reader's intelligence, in years, stateside, at well manicured military colleges. Manpower to analyze is not the issue, analysis is consciously rejected as tending toward less flexible thinking. The goal is to inculcate the idea that there is no best and that good enough now is better, because admitting "best" may exist could set off a search for it or a debate about it, and the whole idea is to replace those with a *decision*, *now*, by the flattered creative commander on the spot.

It is organized sycophancy. And for the manual writer, it is the perfect buck-pass - the commander on the spot is responsible for victory or defeat, and the manual is not. Which is moral cowardice, in the Marine corps' own fine phrase.

"OR helps in the development of certain norms or rules of thumb which you can certainly use in the field"

It can, yes, but if they are then excised from the manuals as tending to teach a positive doctrine, and right and wrong answers, and to constrain the inventive jocks thinking on their feet - then, well, out they go.

How silly does it get? The army manual became so offense oriented and so hit-em-where-they-ain't oriented that it described the purpose of artillery-deliverable minefields as facilitating counterattacks, and said reserves should not be directed at a penetrating enemy's strength (that's a "surface"), but should go somewhere the enemy isn't but still able to see and engage him (that's sophistry). It says the defense is only occasionally imposed on the army by a temporary and local setback, until the initiative can be recaptured, etc.

Meanwhile the Marine corps manual on maneuverism in warfare won't even describe a breakthrough operation or say when to employ a turning movement or describe the actual escalation chain of modern combined arms (while saying abstractly that combined arms is important, of course, in typical contentless CYA mode). Because maneuver isn't a formula, it is a way of thinking, and that way is - well, you figure it out for yourself on the spot and try something, and hope a lot. With buckets of confident testosterone.

This is why the air force fights and wins the nation's wars.


"First sentence does not make sense."

It does, you just didn't get it, as your 7 miles or 5 miles example below makes painfully obvious.

"always be based on certain assumptions."

The assumption need not be "search radius is 7 miles" nor "search radius is 5 miles". It can be "search radii will vary in a roughly flat distribution with a mean around 6 miles and a spread around 1.5 miles". No straw man certainty required. You can give tactics by quintile in eighty variables if you want. They won't actually vary all that much. Asymmetric use of range or superior vision will drive a lot of it, etc.

You are still confused.

First, the radius I am talking about is a different argument from the assumptions I mentioned before.

Second, you obviously have not read the report, or you have not understood it, because you are making things up.

The distribution that Koopman used for detection probability at various ranges gives, mathematically, a value called "sweep width" that is used subsequently in the study for the different patterns.

That parameter has a CERTAIN VALUE, and it is the area under the lateral curve (distribution) of the detection probability.

Mathematically, it is the integral of the function that describes the probability-of-detection curve, with -R and +R as the limits of x.

Solving this mathematically, you get a certain value.

That should make sense, because otherwise you would end up giving orders to plot flight paths with "mean values and distributions", which is total nonsense.
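The sweep-width calculation can be sketched numerically. The lateral range curve below is an invented stand-in shape, not Koopman's actual data; the point is only that integrating it between -R and +R yields the single definite number described above:

```python
import math

# Invented lateral range curve: probability of detecting a target that
# passes at lateral distance x (miles) from the search track.
def p_detect(x, r0=4.0):
    return math.exp(-(x / r0) ** 2)

# Sweep width W = area under the lateral range curve between -R and +R,
# here via the trapezoid rule in place of the closed-form integral.
def sweep_width(R=20.0, n=10_000):
    h = 2.0 * R / n
    ys = [p_detect(-R + i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

W = sweep_width()
print(f"W = {W:.3f} miles")
```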

[ April 22, 2008, 06:02 PM: Message edited by: pamak1970 ]


Originally posted by pamak1970:

(the assumption is) for example that target distribution is uniform over the area we search

I don't buy it. Just as JasonC wrote, the answers you get tend not to fly all over the place based on every detail variable. For example, I consider it very likely that a given search radius works better both when the target distribution is uniformly thin, and when the distribution is skewed, and when the distribution is lumped. The whole point of dominant strategies is that they work better than the others most of the time. Pretending that such strategies do not exist is exactly the perfection strawman. If you want a single radius number, you almost always can get it ("when in doubt, go for 6"). If you accept ranges (like "5-7 is a good range"), you get those. If the answer is funny, it can often be expressed in vulgar form, such as "6 and 8 seem to be sweet spots".
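A toy numerical check of the "sweet spots" idea, with an invented detection model and three made-up target distributions; nothing below comes from a real search study:

```python
import math

# Toy search model (invented): a patrol at radius r catches targets with
# |x| <= r, but effort spread over a wider radius is less thorough.
def expected_finds(r, targets):
    thoroughness = math.exp(-r / 6.0)
    return thoroughness * sum(1 for x in targets if abs(x) <= r) / len(targets)

# Three very different target distributions over the same strip.
uniform = [-10.0 + 20.0 * i / 999 for i in range(1000)]
skewed = [10.0 * i / 999 for i in range(1000)]  # all on one side
lumped = [c for c in (-7.0, -3.0, 3.0, 7.0) for _ in range(250)]

for name, targets in (("uniform", uniform), ("skewed", skewed), ("lumped", lumped)):
    best = max(range(3, 10), key=lambda r: expected_finds(r, targets))
    print(f"{name:8s} best radius = {best}")
```

Here the winning radius barely moves across very different distributions (6, 6 and 7) - a near-dominant choice rather than a knife-edge optimum.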

Well I could tell them, I had accurate results, so did the other political scientists I worked with, and they said so too, loudly, in public. If they have incompetent CYA staffers around them flopping all over the map, that just means they can't manage OR projects any more than they can write truthful and direct manuals.

If you want me to take you seriously, you have to present a bibliography and this "theory" of yours, with the results.


Originally posted by pamak1970:

As I said again before about sweep width, OR can give you equations which have variables inside, but it does not give you a specific number.

No variables being "distributions" inside equations, and all this nonsense that Jason tries to sell.

Rubbish. In many cases, the variables you can control are not the ones that drive the outcome. For example, the variations in W can of course be expressed in a formula such as f(W), which will give you optimal search radius values in the range of 5..7. W may not matter all that much if big-ticket things, like the fuel capacity of your scout plane or ignoring your aft sector because you're faster than the enemy, are given anyway. In reality, many big drivers are often effectively out of your control.

Does reducing the nuanced f(W) -> 5..7 formula to a plain "use 6" definite number reduce your effectiveness on a given flight? Often it does. Is 6 still a definite answer produced by OR? Sure, just as the formula was.

[ April 23, 2008, 11:38 AM: Message edited by: Asok ]


Rubbish. In many cases, the variables you can control are not the ones that drive the outcome. For example, the variations in W can of course be expressed in a formula such as f(W), which will give you optimal search radius values in the range of 5..7. W may not matter all that much if big-ticket things, like the fuel capacity of your scout plane or ignoring your aft sector because you're faster than the enemy, are given anyway. In reality, many big drivers are often effectively out of your control.

Read again the article I posted. If that were the case, there would not be any issue expressed by the author, nor would they try to gather data from real-life experience.

At least up until now, there is no such comprehensive function which includes all the crucial parameters that affect W. OR constantly tries to improve algorithms for more realistic detection results in military simulations.

Do you have any link to give us?

Everybody can pull whatever he wants out of his head and make whatever claims he wants. Try to back them up instead of assuming things.

Second, even if you were able to get the "5.7", that is not the optimum value to use for a particular situation. That is what the author says, and he happens to be a well-known name in the OR community.

In other words, the commander in the field who has to deal with a specific situation can certainly use a "rule of thumb" which might promise the best results over a large sample of possible situations, but HE CAN NEVER BE SURE that this rule will actually produce the "perfect plan - search pattern" in his particular situation. If he makes the mistake of believing that OR gave him the "optimum answer" and goes for it, he will resemble an officer who decides to attack based JUST on the fact that in 60-70% of cases the attacker won.

Using this type of logic, local commanders should automatically decide to attack in the field, since that is the "optimum course of action".

That is something that nobody claims, including maneuverists. Jason often feels the need to support his positions by creating a strawman of what maneuverists really believe, in order to argue "effectively" against it.

As to the last point, two things. First of all, the issue of sweep width is also out of your control, since it is affected by weather and other things.

I guess you mean "chosen parameter" instead of "out of control".

It is true that somebody can argue about the failure of a certain study to include crucial parameters. Most times it is not really a failure, but a necessity of simplification and of keeping the equations solvable. Studies focus on certain things, deliberately ignoring other parameters which are assumed to be constant.

For example, you focus on exploring search patterns in relation to sweep width in a certain common environment where both patterns operate.

In other words, if the weather, for example, is exactly the same for all patterns, which one promises the best results?

So here you have a valid point, which is actually related to the initial assumptions of the study.

Again, there is no "perfect model" which includes everything.

There are monographs of hundreds of pages dealing just with calculating the probability of a clear LOS between air and ground, free of cloud formations, at different seasons and in different regions.

Google CFLOS, "cloud-free line of sight".

The saying is that "all models are wrong but some are useful". This relates to what I said before about the non-existence of some comprehensive equation for W.

[ April 23, 2008, 01:12 PM: Message edited by: pamak1970 ]


Originally posted by pamak1970:

Second, even if you were able to get the "5.7", that is not the optimum value to use for a particular situation. That is what the author says, and he happens to be a well-known name in the OR community.


You misread my post. First, I specifically wrote about "a range of 5..7", based on an input variable. Let weather be the variable. You know the weather pretty much, so you can use it to see if it's better to go with 5 (when it's raining) or 7 (when it's clear skies). Not a single number (5.7), but a range of numbers (5 to 7), based on an input variable. This is basically what JasonC means when he says "ensemble of conditions".

Second, I specifically wrote that dumbing down the above formula (5 when it rains, 7 when it's clear and some other stuff in between) to a single number (always 6) will likely reduce your effectiveness on any given flight. But dumbing down the models or tabulating the answers does not mean that we didn't use OR to get to the answers in the first place.

Getting rid of low-impact variables reduces the universal precision of models, but it's still often a good idea, since the cost of applying the model comes down, so we get a net benefit. For example, including the effect of the Moon's gravity in blue-water naval models has so little impact on the results that we can just leave it out and proceed. Would the results be more perfect if we included the moon's pull? Yes, they would. Would they be more useful? No, they would not.
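A minimal illustration of that pruning trade-off; the drift model and its negligible lunar term are invented for the example:

```python
# "Full" drift model with a negligible lunar term versus a pruned one
# that simply drops it. All coefficients invented.
def drift_full(current_kts, wind_kts, lunar_term=0.0001):
    return 0.9 * current_kts + 0.3 * wind_kts + lunar_term

def drift_pruned(current_kts, wind_kts):
    return 0.9 * current_kts + 0.3 * wind_kts

cases = [(1.0, 5.0), (2.5, 12.0), (0.4, 20.0)]
worst = max(abs(drift_full(c, w) - drift_pruned(c, w)) for c, w in cases)
print(f"worst-case pruning error: {worst:.4f} kts")
```

The pruned model is "wrong" by a ten-thousandth of a knot, far below the uncertainty in the inputs, so dropping the term buys simplicity at no real cost.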

I have a feeling that a lot of your issues with JasonC have to do with misreading his posts as well as mine.


You misread my post. First, I specifically wrote about "a range of 5..7", based on an input variable. Let weather be the variable. You know the weather pretty much, so you can use it to see if it's better to go with 5 (when it's raining) or 7 (when it's clear skies). Not a single number (5.7), but a range of numbers (5 to 7), based on an input variable. This is basically what JasonC means when he says "ensemble of conditions"
First of all, there are many parameters. Second, you do not know the visibility conditions above the search area accurately at the time of the search plane's arrival. How many times were missions aborted because of weather above the target area?

I think what happened is that you missed the following, which I wrote towards Jason:

In order to assign the right values, you need perfect information, which in reality you do not have.

From accurate temperature and humidity values over the area of operations, to the accurate state of the sea, to accurate predictions of cloud formations, and dozens of other parameters.

If your idea is that you should stay passive waiting for the perfect picture before you plot your mission, good luck with that.

His response about a range of values came after that, basically arguing that it does not matter if you are not aware of the specific conditions, since you can express things as a range of values. That does not make sense, because if the patterns are affected by parameters which, you argue, have ranged values, then your orders will point towards azimuths and distances that are also ranged values. That is useless for the pilot of a specific mission, who needs to plot it on the map.

[ April 23, 2008, 02:01 PM: Message edited by: pamak1970 ]


I'm guilty of misreading myself, here. That last paragraph about precision and usefulness is basically the same thing you wrote above. I only read the first part of your post, which is the part where you misread my post.

Why the obsession with picking the best solution? Why not settle for picking a dominant strategy? "Always attacking" is not a dominant strategy, any more than placing the largest possible bet is a dominant strategy in roulette, even though all of the biggest winners have used it.


Originally posted by Asok:

I'm guilty of misreading myself, here. That last paragraph about precision and usefulness is basically the same thing you wrote above. I only read the first part of your post, which is the part where you misread my post.

Why the obsession with picking the best solution? Why not settle for picking a dominant strategy? "Always attacking" is not a dominant strategy, any more than placing the largest possible bet is a dominant strategy in roulette, even though all of the biggest winners have used it.

Yes, I read 5..7 as 5.7.

I agree with the point that, depending on the situation, the local commander will apply some value between 5 and 7, to the best of his judgment and the information he has about weather conditions over the search area at the time of mission execution.


I'm not sure if JasonC is aware of CES procedures, because it sounds a bit like that is what he is describing.

With CES procedures, the commander in practice enters parameter values into predefined functions and receives a number of plans (or just a single plan) that are most likely the most effective tactic under the given conditions (as dictated by a precalculated logical methodology). All commanders are trained in CES procedures.


URD - sure. So why aren't they in the bleeding warfighting manuals? Because they tend to imply the existence of correct answers. Internally, the army in particular got quite good at this stuff in the later cold war. The air force and navy could not move without geeks and are good at it as a matter of course. The point is that the whole tendency of maneuverism pushed as a warfighting style has dropped much such useful stuff in favor of telling young officers to be Guderian, and it is silly.

Pak on the other hand is hopelessly confused about what an assumption is, what a function is, and what you can do to the former with the latter.

Mathematically, if a model takes any inputs known or unknown and produces an optimal output, it can also map the entire space of possible inputs into a space of possible outputs, whose probability density function can then be described. Functions are maps, and they don't need to operate on numbers, they can operate on entire multidimensional spaces. Computationally, you can operate symbolically on whole functions, or when that gets unwieldy simulate over whole populations of meshes in the input variables.

The sensitivity of the outcome density to actual controls is just a projection of that density onto a surface, which fully describes the controllable trade-offs, among the wider range of actual possibles. For every unknown in the input or control variables, there will be an impact on the recommendation probabilities - but you can then prune those whenever it doesn't make enough difference by whatever thresholds you like.

For the practitioner, you can boil that down to measurement or observation or assessment sets, and recommendation sets. See this, do that. Or see this, the chances break down 60-20-20 based on unobservable X.
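A sketch of that observation-to-recommendation mapping under assumed numbers; the 60-20-20 split is wired into the toy model rather than derived from anything real:

```python
import random

random.seed(0)

# Push a distribution over an unobservable X through the (invented)
# model and report how often each recommendation comes out on top.
def best_tactic(x):
    if x < 0.6:
        return "tactic A"
    elif x < 0.8:
        return "tactic B"
    return "tactic C"

N = 100_000
counts = {"tactic A": 0, "tactic B": 0, "tactic C": 0}
for _ in range(N):
    counts[best_tactic(random.random())] += 1

for name, c in sorted(counts.items()):
    print(f"{name}: {100.0 * c / N:.0f}%")
```

The output is a recommendation set with odds attached - "see this, the chances break down roughly 60-20-20" - not a demand for perfect information.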

As for the silly mistake he is describing, the author was ranging over one variable when he needed to range over another one as well, and that is exactly all. A correct description of the trade-off would be: errors in estimating that variable will reduce the risk of misses in the search area but lower the search area per unit time or per platform, or conversely, depending on whether you high-ball or low-ball it. The asset-rich practitioner will choose low risk of a miss within the pattern, etc.


JasonC,

Are you aware of the MDMP (Military Decision-Making Process) used by the Marines (and the Army)?

It's divided into a number of field manuals covering key subcomponents like Intelligence Preparation, the Targeting Process and Risk Management.

EDIT: googled some of the manuals.

FM 5-0 (PDF) - Army Planning and Orders Production

FM 34-130 (PDF) - Intelligence Preparation of the Battlefield

FM 6-20-10 (iPaper) - Tactics, Techniques, and Procedures for the Targeting Process

FM 100-14 (PDF) - Risk Management

[ April 25, 2008, 07:49 AM: Message edited by: undead reindeer cavalry ]


Pak on the other hand is hopelessly confused about what an assumption is, what a function is, and what you can do to the former with the latter.

You can write a whole essay about what you believe.

It is still meaningless when you do not try to use a single bibliographic source or link to show us that what you describe is the true picture of how OR is implemented.

I gave you links, and I pointed you to a specific portion of a subject you mentioned (search theory).

You refuse to follow this path, and try to counter-argue by writing general thoughts, where you feel comfortable saying whatever you want.

The point is that, first, you still do not have a clue what an assumption is, which is made clear by the fact that you state that a function can affect the assumption, although I was very clear in pointing out that the issue of applying "correct values" has nothing to do with the assumption issues I mentioned at first.

The latter has to do with the implementation of a theory, while an assumption is actually the starting point of the theory.

Second, you still bypass the issue of how to plot an optimum flight path, which will have a specific geometry, when the parameters that affect it, like sweep width, do not have a certain value.

You can map whatever you want. The problem is still the same, because in a real situation the parameters have specific values and correspond to a certain point of the "map" or the graph.

[ April 25, 2008, 01:11 PM: Message edited by: pamak1970 ]


pakmak - here is your problem. You are talking to a major content developer for social science applications of about the most sophisticated technical software package in the world (Mathematica), who teaches formal modeling methods to scientists as a sidelight. And you can't wrap your head around the notion that he might know more about the subject than you do.

Does one plane have to fly one search path? Yes. Can the commander be given a whole set of search patterns to hand on to the pilots, depending on the critical variables of his problem (not the pilot's, the commander's)? Of course. Does uncertainty in any lowest-level variable break any step in any of it? Not at all.

Perhaps the commander has lots of assets and not much space to search; then he should low-ball the search radius variable. Perhaps the mission is defensive and the critical goal variable is the range of first detection at a five-standard-deviation underperformance of expectation, because the issue is how close an enemy SSN gets to a CVN.

Perhaps the mission is purely attritional and the only thing that matters is the expected finds per unit time, and the target set is known to be rarely visible at all; then maximizing the area searched will matter more, and highballing the figure is the right answer.

Perhaps the targets are stationary and will always be detected if passed over, the goal is attritional, and they are known to be randomly distributed; then the best solution is to minimize search-pattern overlap, which means highball if the search won't saturate the search space, and use the prospect of saturation, if wide search would get you there, to dial down the degree of highballing.

I don't need to know beforehand the "true" answer, 5 miles or 7 miles. I just need to know the goal and the associated costs, and that it is uncertain in that range - quite sufficient. I can get you the best expectation for what you know about the unknown. If you can find out more about it, I can tune the result to marginally increase that expectation.
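As a sketch of what "the best expectation for what you know about the unknown" can mean (my illustration under assumed models, not a claim about any particular doctrine): put a flat prior over the uncertain 5-to-7-mile sweep width and pick the track spacing that maximizes expected detections per unit time. The sensor model and candidate values below are assumptions for the example.

```python
# Definite-range sensor on parallel tracks: the fraction of targets detected
# is min(1, width / spacing), and area per unit time scales with the spacing,
# so detections per unit time work out to min(1, width / spacing) * spacing,
# i.e. min(spacing, width). Average that over the prior on the unknown width.
def expected_rate(spacing, widths):
    return sum(min(spacing, w) for w in widths) / len(widths)

widths = [5.0, 5.5, 6.0, 6.5, 7.0]             # "somewhere between 5 and 7 miles"
candidates = [s / 10 for s in range(40, 101)]  # spacings from 4.0 to 10.0
best = max(candidates, key=lambda s: expected_rate(s, widths))
# For this purely attritional goal the optimum highballs to the top of the range.
```

Change the objective (say, to a guaranteed-coverage goal) and the same machinery recommends low-balling instead; nothing about the unknown itself breaks the calculation.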

Now, if you hand me nonsense then of course I can give you nonsense back. Just telling me something is an unknown will not cause any such problem; mere dependence of a solution on an unknown won't, either. If you tell me you know something that just isn't so, or pretend you know more than you do, then sure, that can yield a recommendation no sounder than your input. But a mere "known unknown"? No, not an issue at all.

Here is a dead simple OR sample problem. You have a space of known size to search; searching each cell of it costs 1 (we've indexed everything else to that cost); there are n items, each of value v, scattered across the space, known to follow a flat random distribution. v can be fixed or can vary from item to item over a known set of values, up to you. You can search sequentially or many cells at once.

Devise the optimal search and explain what aspects of the problem it exploits. Give a theoretically best approach, and a computationally practical one for any problem size that will be nearly as good and can be boiled down to a simple rule of thumb. Hint - because the distribution is known to be flat random a priori, the pattern used is irrelevant. But there is still an answer - and not a wing-it answer. (Adding slight bits of chance to the values or the number of items will fuzz the result slightly but not change the approach.)
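One reasonable reading of the intended answer (my sketch, under the stated assumptions, not necessarily the author's solution): because placement is flat random, visit cells in any order, and stop as soon as the expected value of the next cell, v times items-not-yet-found over cells-not-yet-searched, drops below the unit cost. The rule of thumb that falls out: search while value times unfound items is at least the number of unsearched cells.

```python
import random

def adaptive_search(cells, n_items, value, rng):
    """One run of the stopping rule for the flat-random search problem.

    Each unsearched cell is equally likely to hold any not-yet-found item,
    so the expected payoff of the next cell is
    value * (items not yet found) / (cells not yet searched).
    Search while that is at least the unit cost; the order of cells is
    irrelevant, so a random permutation is as good as any pattern.
    Returns the net profit (finds minus search costs) of the run.
    """
    item_cells = set(rng.sample(range(cells), n_items))  # flat random placement
    found, net = 0, 0.0
    for searched, cell in enumerate(rng.sample(range(cells), cells)):
        if value * (n_items - found) / (cells - searched) < 1.0:
            break  # marginal expected gain has dropped below marginal cost
        net -= 1.0                       # pay the unit cost of this cell
        if cell in item_cells:
            found += 1
            net += value
    return net
```

Note the two regimes: if v * n is below the space size, the rule says don't search at all; if it is comfortably above, the rule searches until the finds themselves drive the remaining expectation under the cost line.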


“pakmak - here is your problem. You are talking to a major content developer for social science applications of about the most sophisticated technical software package in the world (Mathematica), who teaches formal modeling methods to scientists as a sidelight. And you can't wrap your head around the notion that he might know more about the subject than you do.”
Jason

Here is the problem you have. You are talking to someone who has an engineering background and has actually worked with OR analysts in industrial engineering. My professional background, and our conversations now and in the past with ridiculous attempts of yours to solve basic OR equations, make it clear to me that you are a fraud: a person with a limited mathematical background who claims whatever he wants under the cover of the internet and the lack of verification, counting on the fact that most members here are not interested in this field and will not challenge all the BS you feed them about OR. To most people here, it is better to have you on their side, willing to answer their concerns about the game, than to confront you over something so boring to them.

“I don't need to know beforehand the "true" answer, 5 miles or 7 miles. I just need to know the goal and the associated costs,”
Perhaps you are lost again, because if you want to know the associated cost, you basically need to know multiple parameters, including the 5- or 7-mile sweep width, since the latter affects the time to cover a certain area, and "cost" is associated to a high degree with time and effort spent. Time is actually a common way in OR to measure the cost of a search strategy.

“Here is a dead simple OR sample problem.”

Where is the link, or any decent attempt to present the actual source of the information? You are dishonest to the point where you prefer to give lectures again about things whose source you do not reveal, simply because you know that when I track down the details you are going to have a hard time responding.

You always start by repeating, like a parrot, something you read somewhere, and when you are challenged to dig into the details, you refuse, preferring to give long essays talking about everything and nothing at the same time. Are you really serious that I am going to discuss an OR issue which you present according to your distorted views, without seeing the original source?

On the other hand, it is amazing that after about 5 or 6 posts of mine, you try again to argue about something irrelevant. The issue is not whether a theory can calculate an optimum in some cases with random parameters. The issues I have pointed to for the last 5-6 posts were:

1. The assumptions used to build the theory

2. The IMPLEMENTATION or APPLICATION of the theory in practice.

Implementation comes after the full development of the theory. So giving me a theoretical problem, trying to argue that you have somehow countered my arguments, just shows that you are very slow in grasping the context of an English document you read.

