How Hot is Ukraine Gonna Get?


Probus

Recommended Posts

8 minutes ago, dan/california said:

I am just saying we should apply a bit of realism now, and not after we have gotten a couple of heavy brigades cut into little tiny pieces. At which point we will do it in a panic, badly.

Yeah ok.  I guess we’re really talking about whether we can cut up the enemy’s heavy brigades in a heavy EW environment, though?

I suppose this hypothetical problem is analogous to Russian spec ops being authorised to fire Shmel rockets at Beslan, where western equivalents would have had to be a little more… tactful.  Not sure that counts as the Russians being more effective, though…?


1 minute ago, billbindc said:

once the software is written it is a fairly low cost decision to add it on. 

I'm not sure, 'train as you fight.'

Maybe low cost to add it, but how long to proof it, train on it, and field it, widely?


1 minute ago, Tux said:

Noted. However, do the devices in your example reliably guide drones to the correct targets?  If so why would the West supposedly not permit that?  If they don’t then is there any significant benefit gained from their use, vs the added cost each time the drone selects (for example) a civilian target in error?

I am not sure if Western countries already have legislation preventing that, but this is definitely a scary subject, and no Western country will just issue an order for autonomous drones without huge debates. Why? Because you have robots hunting people, and people are genuinely afraid of that. That is just how the human mind works - we all have a mental image of what other humans may be thinking and are relatively relaxed around each other. But we are scared around large wild animals because we just do not know what they are "thinking". And we have no mental image of what an AI component of a drone seeker is "thinking".

For example, from various people's attempts to use ChatGPT I have noticed that it has no compunctions about lying to answer a question. It will make up the answer out of whole cloth from time to time. A US lawyer got into big trouble because of that when he used ChatGPT to draft a pleading to a court: ChatGPT cited precedents allegedly supporting his position, but they were completely made up - judgements that were never handed down, which the presiding judge obviously noticed. The unfortunate innovator received disciplinary charges for all his trouble. A colleague of mine was testing ChatGPT on routes to various places, and the AI made up place names, street names, the whole lot. Is this something specific to this iteration of AI? Can it be avoided? How would that impact targeting processes? No idea.


2 minutes ago, OBJ said:

Given what we know about machine learning...the machines are apt to do it better than humans. The machine won't be scared, tired, angry, sorrowful, melancholy, vengeful, hungry, or feeling cold or wet either.

True, the problem is context. Machines do not understand situational context. Example: we clipped a guy in Afghanistan for holding a cellphone in the wrong place for too long. That took human judgement to make righteous, because we understood the context. Personally, my hardest moment was not shooting a guy, and I made that snap judgement based on the look in his eyes and a bunch of non-verbal cues (which turned out to be a very good thing).

Machines are not set up for that.  Is that guy digging a hole for an IED or water - a bunch of factors roll into determining which is which.  Machines are not there yet. 


1 minute ago, OBJ said:

I'm not sure, 'train as you fight.'

Maybe low cost to add it, but how long to proof it, train on it, and field it, widely?

If I read the tea leaves correctly, I think a lot of the proofing is already in train.


5 minutes ago, OBJ said:

Given what we know about machine learning...the machines are apt to do it better than humans. The machine won't be scared, tired, angry, sorrowful, melancholy, vengeful, hungry, or feeling cold or wet either.

Not sure the current state of "driverless car" tech supports this. Machines don't get tired, but at the moment they might struggle to see an enemy soldier with a suntan.


2 minutes ago, The_Capt said:

True, the problem is context. Machines do not understand situational context. Example: we clipped a guy in Afghanistan for holding a cellphone in the wrong place for too long. That took human judgement to make righteous, because we understood the context. Personally, my hardest moment was not shooting a guy, and I made that snap judgement based on the look in his eyes and a bunch of non-verbal cues (which turned out to be a very good thing).

Machines are not set up for that.  Is that guy digging a hole for an IED or water - a bunch of factors roll into determining which is which.  Machines are not there yet. 

Think we're maybe talking past each other: low-intensity limited war vs high-intensity major power conflict.

BTW, glad you are here with us to tell the story.


3 minutes ago, billbindc said:

If I read the tea leaves correctly, I think a lot of the proofing is already in train.

Other than feeling good about it, not sure what human in the loop capability gets us if our doctrine, training and logistics all say we go autonomous before hostilities.

I think all sides are pretty worried about first strike.


5 minutes ago, Tux said:

I suppose this hypothetical problem is analogous to Russian spec ops being authorised to fire Shmel rockets at Beslan, where western equivalents would have had to be a little more… tactful.  Not sure that counts as the Russians being more effective, though…?

An actual example. Ukrainians taught urban combat by the English complained that they were trained in accordance with the most recent experience of the army of His Britannic Majesty, which is Afghanistan. The drills were very careful and would be perfect for searching a house suspected of hiding a cache of weapons among the civilians. In Ukraine, the preference is to demolish a building with HE without clearing it at all. If that is not possible and the building has to be cleared, the first visitor to any room is a frag grenade.

If the Brits tried to be tactful in the above way, that would certainly make them less effective.


7 minutes ago, Tux said:

Not sure the current state of "driverless car" tech supports this. Machines don't get tired, but at the moment they might struggle to see an enemy soldier with a suntan.

Maybe. Your thoughts on how the machine-learning data set for high-intensity major power conflict targeting compares with that for driverless vehicles?


12 minutes ago, Tux said:

Not sure the current state of "driverless car" tech supports this. Machines don't get tired, but at the moment they might struggle to see an enemy soldier with a suntan.

Getting into part of what I do for a living here: the job driverless-car sensors and software must handle is actually the opposite of an autonomous suicide drone's. The former must navigate every highly complex driving environment and avoid hitting anything; an autonomous suicide drone can be geofenced and only has to hit the likeliest right thing most of the time. It's an order of magnitude easier.
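
To make the difference concrete, here is a rough sketch of the kind of "geofenced, hit the likeliest right thing" logic being described. It is purely illustrative - the class names, labels and thresholds are invented, not taken from any fielded system:

```python
# Illustrative only: a toy version of "geofenced, engage the likeliest right thing".
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "tank", "truck", "person"
    confidence: float   # classifier confidence, 0..1
    x: float            # position estimate in a local grid, metres
    y: float

def inside_kill_box(x: float, y: float, box=((0.0, 0.0), (500.0, 500.0))) -> bool:
    """Axis-aligned geofence check; a real system would use polygons plus altitude."""
    (x0, y0), (x1, y1) = box
    return x0 <= x <= x1 and y0 <= y <= y1

# Hypothetical priority list: vehicle-class targets only, never people.
PRIORITY = {"tank": 3, "ifv": 2, "truck": 1}

def pick_target(detections: list[Detection], min_conf: float = 0.8) -> Detection | None:
    candidates = [
        d for d in detections
        if d.label in PRIORITY and d.confidence >= min_conf and inside_kill_box(d.x, d.y)
    ]
    if not candidates:
        return None  # loiter or return home rather than guess
    return max(candidates, key=lambda d: (PRIORITY[d.label], d.confidence))
```

Note the early `return None`: the drone is allowed to decline to act, which is exactly the option a driverless car in traffic never has.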


44 minutes ago, The_Capt said:

I suspect we are looking at a hybrid system.  Human control up to a release point, and then a pre-authorized kill box with full autonomy once in denied space.  Target discrimination will be a major argument - hell, we have problems with humans doing it right now.  It may have to stay at the vehicle/anti-materiel level - we can program in restricted targeting of military vehicles, but individual people is really hard to do.  Is the person carrying a gun?  Are they acting threateningly?  Are they being shady?  Under ROEs a human can engage on all of these, but I doubt we will trust a machine with all this for some time.

This is already standard.  Commercial drones can automatically return "Home" if signal is lost.  That is 100% autonomous flight.  Extending that to flying to a specified point ahead of time is already happening.  So flight paths are already being used autonomously.  We've even seen target recognition being used in drones operating in Ukraine.  So I think we're already seeing what you've described and more.

Steve
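
A minimal sketch of the hybrid behaviour described in this exchange - human control up to a release point, pre-authorised autonomy inside a kill box, and the commercial-style return-to-home failsafe on lost link. The mode names and transition rules are invented for illustration, not any real autopilot:

```python
# Sketch only: hybrid control-mode logic for a strike drone.
from enum import Enum, auto

class Mode(Enum):
    PILOTED = auto()      # operator flying via radio link
    AUTONOMOUS = auto()   # inside a pre-authorised kill box with the link denied
    RETURN_HOME = auto()  # commercial-style failsafe: fly back to the launch point

def next_mode(mode: Mode, link_ok: bool, in_kill_box: bool, box_authorised: bool) -> Mode:
    if mode is Mode.PILOTED and not link_ok:
        # Lost link: only go autonomous where a human pre-authorised it.
        return Mode.AUTONOMOUS if (in_kill_box and box_authorised) else Mode.RETURN_HOME
    if mode is Mode.AUTONOMOUS and not in_kill_box:
        return Mode.RETURN_HOME  # never engage outside the authorised volume
    if mode is Mode.AUTONOMOUS and link_ok:
        return Mode.PILOTED      # hand control back whenever the link recovers
    return mode
```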


On an unrelated note - some of the footage of FPV drone attacks on tanks shows the tank moving, making evasive maneuvers, being hit by one drone, still moving, hit again, and again, and finally catching fire. It reminded me of something, but I could not place it. Today I found it: the footage of Prince of Wales and Repulse off Kuantan, 1941.

After similar experiences, US battleships were developed largely in the direction of floating AA batteries, with a secondary shore bombardment role. Maybe that is the direction in which the tank will develop as well - a platform with anti-drone weapons, which can potentially shoot up the enemy with a big cannon if it makes it as far as direct LOS.

Edited by Maciej Zwolinski
Mixed up exotic names

From one point of view, a basic mine is an autonomous drone with extremely low mobility,  a crappy sensor package (touch only), and the targeting decision logic is "kill anything you see" with no human veto in the loop.

Mines don't scare people as much as drones though because they don't come find you while you are sitting around in your trench. We have the illusion of agency,  where since it is our actions that trigger the mine, we can believe that we have a degree of control by making better choices. 

(A bit like why people distrust automation  in cars. Even if it was objectively safer overall, we don't like being in a dangerous situation where we have no input. We'd rather have some influence ourselves even when the data show that it's more dangerous for us overall).


Adding to what @TheVulture said: there is not much difference in the end result between sending a drone into a zone with the order to kill any human it can find and sending an artillery shell into the same area.

Apart from hand-to-hand combat, everything is a remote kill. The really scary part is when the AI can decide when to send a drone to kill somewhere. Hello Skynet.


1 hour ago, OBJ said:

I'm not sure, 'train as you fight.'

Maybe low cost to add it, but how long to proof it, train on it, and field it, widely?

Very fast. It is being trained in virtual (look up Nvidia Omniverse and Microsoft's drone simulator successor to AirSim) and real environments. If you can basically train on 100 years of engagements in a few months, and supplement that with real training exercises, you are looking at a quicker procurement cycle than pretty much anything else the military can muster. Fielding it is simply downloading the software and making sure the signature is ok.
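
For a sense of scale (the numbers here are purely illustrative): 100 simulator instances each running at 5x real time deliver 500 hours of simulated flight per wall-clock hour, so roughly a century of engagements takes on the order of two to three months.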


1 hour ago, poesel said:

There is not much difference in the end result between sending a drone into a zone with the order to kill any human it can find and sending an artillery shell into the same area

Except for the significant difference that there is zero confusion over who gets a kick in the balls when there is a screwup and an artillery round lands on either blue or yellow, rather than red.

Edited by JonS

3 hours ago, dan/california said:

What I am trying to say is that the military effectiveness of giving said quadcopters a kill box and a set of targeting priorities is going to be at least an order of magnitude greater than making each of them phone home for permission.

You are confusing presumed efficiency for presumed effectiveness, which rather calls into question any conclusions or deductions you are attempting to assert.


1 minute ago, JonS said:

You are confusing presumed efficiency for presumed effectiveness, which rather calls into question any conclusions or deductions you are attempting to assert.

If communications are effectively jammed by EW, I would argue both would go up a lot for a drone that doesn't need to phone home versus one that does.


2 hours ago, kimbosbread said:

Very fast. It is being trained in virtual (look up Nvidia Omniverse and Microsoft's drone simulator successor to AirSim) and real environments. If you can basically train on 100 years of engagements in a few months, and supplement that with real training exercises, you are looking at a quicker procurement cycle than pretty much anything else the military can muster. Fielding it is simply downloading the software and making sure the signature is ok.

Have western militaries decided to do this? I agree machine learning is amazing compared to human learning, no comparison really, but fast is too late if implementation comes after need.

How long for humans to assemble, vet and configure the data sets to be used to train offensive military AI?

If AI creators can't explain how their creations work, how long will it take to build human confidence, at least in the West, in offensive military AI before going beyond prototypes? After that, how long for fully integrated fielding?

Maybe it is just simple instantiation, but I am guessing offensive AI will be used on multiple platforms and for multiple uses, i.e. single vs swarm, so some platform-specific customization will be needed - cue Dan and his concerns about those perfidious DoD contractors.

And I'm sorry, when I said training I was thinking more of training the initial set of people in the military organizations who will need to perform all the tasks associated with autonomous offensive military AI operations - everything from commanding units employing offensive AI assets, to updating learning data sets and software versions, to writing operations and support doctrine, to assigning missions, to ISR networking, to BDA, to rearming, to repair and maintenance for the reusable platforms, etc.

 

Did you know that in the first naval night battle off Guadalcanal, 8-9 Aug 1942, some of the US ships had radar (the Japanese did not, but did have superb night training and optics), yet the commanders didn't know how to use it and discounted its value? Many factors, but the result was still a major Allied defeat.

https://en.wikipedia.org/wiki/Battle_of_Savo_Island

As late as Nov 1942, US Navy task force commanding admirals placed the ships with the most capable radars at the rear of their formation rather than in the van. Many factors, but the result was another major Allied defeat.

https://usnhistory.navylive.dodlive.mil/Recent/Article-View/Article/3207198/radio-over-radar-night-fighting-chaos-at-guadalcanal-12-13-november-1942/

Fun fact - the US Navy actually had more sailors killed in the naval battles off Guadalcanal than the Marines did on Guadalcanal.

 

Realizing the full potential of new military technology takes time, best not to be figuring out how the dang thing works when you're getting shot at.

Edited by OBJ

24 minutes ago, OBJ said:

Have western militaries decided to do this? I agree machine learning is amazing compared to human learning, no comparison really, but fast is too late if implementation comes after need.

DARPA was working on this kind of system at least 5 years ago for drone swarms, and obviously this was used for the famous Grand Challenge around 20 years ago. You know how combat video games look pretty real? This is the same thing, except the computer is playing the game. Seriously, check out https://microsoft.github.io/AirSim/ for something that is stone age by today's standards.
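
For anyone who has not touched it, a minimal AirSim flight script looks roughly like this (it assumes the simulator from the link above is already running locally; AirSim uses NED coordinates, so negative z means up):

```python
# Minimal AirSim example: connect, take off, fly a leg, grab a camera frame, land.
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
client.moveToPositionAsync(40, 0, -10, 5).join()  # 40 m north at 10 m altitude, 5 m/s

# In a training pipeline this frame would go to the perception model,
# whose output becomes the next velocity command.
png_bytes = client.simGetImage("0", airsim.ImageType.Scene)

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```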

24 minutes ago, OBJ said:

How long for humans to assemble, vet and configure the data sets to be used to train offensive military AI?

I honestly think assembling the training data is the bigger challenge, and I think LLMs offer a great opportunity here, where you can in effect describe a scenario or battlefield with words, and have it generate a few hundred and then humans can inspect and make sure it’s not completely unrealistic.
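
A sketch of that "LLM drafts, humans vet" loop. The prompt wording, field names and the call_llm() placeholder are all made up for illustration; this is not a real pipeline:

```python
# Illustrative scenario-generation loop: an LLM drafts synthetic battlefield
# layouts as JSON, malformed ones are discarded, and humans review the rest.
import json

PROMPT_TEMPLATE = (
    "Describe a company-sized defensive position in {terrain} during {weather}. "
    "List vehicles, dismounts, decoys and camouflage as JSON with fields "
    "'objects' (a list of {{type, count, concealment}}) and 'notes'."
)

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API you use; returns canned JSON so the sketch runs."""
    return json.dumps({"objects": [{"type": "tank", "count": 3, "concealment": "treeline"}],
                       "notes": "placeholder"})

def draft_scenarios(terrains, weathers, per_combo=5):
    drafts = []
    for terrain in terrains:
        for weather in weathers:
            for _ in range(per_combo):
                raw = call_llm(PROMPT_TEMPLATE.format(terrain=terrain, weather=weather))
                try:
                    drafts.append(json.loads(raw))
                except json.JSONDecodeError:
                    continue  # discard malformed generations
    return drafts  # a human reviewer then culls anything unrealistic

scenarios = draft_scenarios(["steppe", "urban rubble"], ["clear", "fog"])
```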

24 minutes ago, OBJ said:

If AI creators can't explain how their creations work, how long to build human confidence, at least in the west, in offensive military AI before going beyond proto types? After that, how long for fully integrated fielding?

How does AI work? We are talking about mostly computer vision here, so this is pattern matching plus a priority list of targets, and maybe a playbook of multi-drone attacks. This isn’t an LLM hallucinating whatever it hallucinates. You can run the same kind of tests other missiles get, and you can look at what FPV operators in Ukraine are doing. You can even use human FPV pilots as your baseline, and train your models based on what they do.
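
The "use human FPV pilots as your baseline" idea is essentially behaviour cloning: log (camera frame, stick input) pairs from real pilots and train a model to reproduce the inputs. A toy PyTorch sketch, with made-up tensor shapes and random data standing in for real flight logs:

```python
# Toy behaviour-cloning setup: predict the four stick axes from a camera frame.
import torch
import torch.nn as nn

class PilotPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 21 * 21, 128), nn.ReLU(),  # 96x96 input after two stride-2 convs
            nn.Linear(128, 4),                        # throttle, yaw, pitch, roll
        )

    def forward(self, frames):
        return self.net(frames)

def train_step(policy, optimiser, frames, pilot_inputs):
    """One supervised step: push the model's outputs towards what the human commanded."""
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(policy(frames), pilot_inputs)
    loss.backward()
    optimiser.step()
    return loss.item()

policy = PilotPolicy()
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)
frames = torch.randn(8, 3, 96, 96)   # batch of camera frames (fake data)
pilot_inputs = torch.randn(8, 4)     # logged stick positions (fake data)
train_step(policy, optimiser, frames, pilot_inputs)
```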

Look, the evolutionary pipeline is pretty obvious:

  • FPV drone, with pilot and transmitter in a hide or trench
  • Same setup, but the transmitter/base station can now identify targets and let the pilot select them, and then the drone does the rest (but controlled by the base station)
  • Push some of the computer vision hardware onto the drone, so it doesn’t need to have an always stable link to pilot
  • ???
  • Autonomous death swarm!

11 minutes ago, kimbosbread said:

You can run the same kind of tests other missiles get,

Hmm. Long range missiles that hit ground targets tend to go to a geographic location - a lat/long - rather than seeking a 'thing' to hit. Shorter range missiles that target a thing (e.g. Javelin) are told which thing to go after by a human operator.

Missiles that operate against maritime targets are looking for big metal boxes on a large, wet, and flat but otherwise featureless table. In particular there are notably few non-combatants walking about the place.

Air missiles are looking for metal darts in a vacuum that's even less feature-filled than the ocean.

I'm not sure how testing for those applies to drones?

Edited by JonS

12 minutes ago, kimbosbread said:

Pretty cool. What I read said it started in 2017 and the project completed in 2022, so recent. The computer did run the drone into a tree, twice, presumably intentionally :) As you and others said, autonomous flight control and pathing is already a thing.

21 minutes ago, kimbosbread said:

I honestly think assembling the training data is the bigger challenge, and I think LLMs offer a great opportunity here, where you can in effect describe a scenario or battlefield with words, and have it generate a few hundred and then humans can inspect and make sure it’s not completely unrealistic.

Will have to trust you here. LLMs, maybe - I don't know enough about how descriptive language would enable proper networking with other units, prioritizing platform survival actions, and movement for target acquisition and engagement in a dynamic battlefield setting across different biomes, weather conditions, and EM and ADA environments, etc. to produce useful AI, or how to integrate multiple target-type image variations into AI training.

12 minutes ago, kimbosbread said:

How does AI work?

We'll have to get through the politicians, at least some of them, armed services committees, you know we will. They:
1. Heard ChatGPT lies
2. Heard ChatGPT makes s**t up
3. Heard Geoffrey Hinton and other AI scientific heads say AI could end humanity
4. Went to see Mission Impossible: Dead Reckoning Part 1

and, at least in Congress, only 1 in 6 has any military experience, let alone the kind of military experience that would be useful in understanding the technology's military applications.

35 minutes ago, kimbosbread said:

Look, the evolutionary pipeline is pretty obvious:

  • FPV drone, with pilot and transmitter in a hide or trench
  • Same setup, but the transmitter/base station can now identify targets and let the pilot select them, and then the drone does the rest (but controlled by the base station)
  • Push some of the computer vision hardware onto the drone, so it doesn’t need to have an always stable link to pilot
  • ???
  • Autonomous death swarm!

Trust you here too, lot of sub-steps in ??? :) 

