How Hot is Ukraine Gonna Get?


Probus


17 hours ago, cesmonkey said:

Lengthy analysis by Konstantin Mashovets of the Kharkiv front:
https://t.me/zvizdecmanhustu/1877

 

Thanks for that.

My take is that this is more evidence that the Russian offensive over the border opposite Kharkiv is over.  However, there are some ways Russia could still make things difficult for the defenders.  The problem for Russia is that they seem to have burned out their initial force and reserves, so to have any chance of gaining new ground they'll need to commit units from the 44th Corps, which were suspected to be available for the stalled/cancelled Sumy border incursion.  And if they do commit these forces, it will be clear to Kyiv that the Sumy threat is over, and Kyiv can act accordingly.

Mashovets seems to be suggesting that Russia may risk removing battalion-sized infantry and/or VDV units from the south and Luhansk sectors in order to maintain stress in the Kharkiv area.  The problem with that is some of these units are needed to put pressure on Kupyansk from the east, so shifting them to the Kharkiv bump to attack from the west isn't likely to produce good results, since strong attacks from both sides are likely the only hope they have of moving closer to Kupyansk.

 

Taking a step back, this information reinforces my earlier assessment that Russia has concluded there isn't much hope of major gains in either Luhansk or Donetsk.  What it's trying to do now is draw enough Ukrainian forces away to make some sort of difference in the north of Luhansk.  But Mashovets doesn't think they have enough combat power to do it, which is probably accurate.

Steve


Yeah, like you've been saying: they push and take some territory with big losses, but then what? Unless they can push on a broad front, they won't be able to sustain their push, and it's not clear they have forces capable of this. This isn't a pocket they are trying to close.

So what’s next? The arrival of F16s now that the air defenses (and radars in particular) in Crimea have been degraded?


2 hours ago, JonS said:

Well, given that drones are changing warfare all by themselves, that seems like the least they could do, no?

Even if they're not going to make it, they could at minimum deliver it.

Gawd, typical gunner.  Drone collects oranges, squeezes oranges, puts little umbrella in glass, delivers orange juice - “hey there is too much pulp…this whole drone thing is overblown hogwash!”

 

 


56 minutes ago, The_Capt said:

Gawd, typical gunner.  Drone collects oranges, squeezes oranges, puts little umbrella in glass, delivers orange juice - “hey there is too much pulp…this whole drone thing is overblown hogwash!”

 

 

One of the first times I heard of drones being used commercially was in the Great Lakes area, where someone was delivering beer to ice fishermen.  The FAA put a stop to that, but I'm going to guess he didn't have a license to deliver alcohol anyway.

Steve


Posted (edited)
2 hours ago, The_Capt said:

Gawd, typical gunner.  Drone collects oranges, squeezes oranges, puts little umbrella in glass, delivers orange juice - “hey there is too much pulp…this whole drone thing is overblown hogwash!”

 

 

See? It wasn't an unreasonable request after all.

Wait, what was your point again? ISTR it was that delivering juice was impossible, or something like that.

Also, yes: gunners have standards, for which I shall not apologise. You can continue to drink beaver piss and swamp water if that is your wont. I shall be drinking craft beers and - as it turns out - freshly-squeezed low-pulp orange juice.


Edited by JonS

This part of today's ISW update made me chuckle.

Quote

Russian Security Council Deputy Chairperson Dmitry Medvedev threatened Russian internet technology and telecommunications company Yandex because its large language model failed to provide responses that cohere with ongoing Russian information operations. Medvedev criticized Yandex's Alisa voice assistant (ostensibly similar to Amazon's Alexa) on May 19 for being unable to answer questions about the US law approving the seizure of Russian foreign assets or supposed monuments in Ukraine to Nazi sympathizers.[17] Medvedev asserted that Yandex's artificial intelligence (AI) is a "coward" for failing to provide his desired answers to these questions and suggested that Yandex may be concerned about offending its Western clients. Medvedev suggested that Yandex's supposed unwillingness to provide answers to these questions greatly undermines trust in Yandex's products and could provide grounds for the Russian government to recognize Yandex's services as "incomplete" and even identify Yandex's current managers as "foreign agents." Russian news outlet RBK reported that Russian officials have previously submitted complaints against similar large language models for failing to generate sufficiently patriotic responses.[18] Russian officials will likely continue to struggle with shortcomings of large language models that are well known to others with more experience of those systems as the Kremlin continues efforts to solidify its control over the Russian information space.

Can you shoot an AI for cowardice? I would think they would find a way.

Armando Iannucci needs to get his pencil out when all this is over.


8 hours ago, JonS said:

Wait, what was your point again? ISTR it was that delivering juice was impossible, or something like that.

My point was that your point was silly and unrealistic.  But if you have forgotten my point, then yours has no doubt retreated into the darkness, never to return.

An entire military trade built around error and missing - not an entirely surprising outcome.


10 hours ago, Battlefront.com said:

Some good news from Haiduk!  It's good to see you back after some days off ;)

  1. The ship "Tsyklon" was hit and sunk.  It's a modern ship responsible for launching Kalibr missiles, not the useless old "Kovrovets".  On top of that, it's another Russian ship sunk because Russia was too stupid or desperate to keep it away from Sevastopol. 
  2. Kadyrovites got a nasty surprise from GLSDBs, which supposedly Ukraine hasn't been able to use effectively.  They looked pretty effective to me!  The loss of a plane and pilot, possibly connected with this attack, is sad to hear.
  3. Some evidence that Russia is having to prioritize ATGM allocations and the Kherson area is (not surprisingly) being shorted.  Inside this good news are more hints that Russia is running down its Cold War stocks, as older missiles that apparently aren't functional are being distributed to troops.  I'm a little surprised that Russia doesn't have the ability or willingness to "refresh" potentially defective missiles before handing them out.
  4. Röpke appears to have spread some misinformation.  IIRC he is good at that, though I haven't heard anything out of him in a long time (his Twitter feed is ancient).  Sneaking into a gray zone, taking a picture, then heading out is a trick both sides use to spread a false message.
  5. An attack on Russian petro infrastructure in Vyborg by very long range drones.  This is interesting because it's near a NATO country.  I wonder if Finland was given a heads-up?

That's a decent enough list for this weekend, so if I missed anything I think I can be forgiven :)

Steve

Well, you missed another attack on an airport.

https://mastodon.social/@MAKS23/112473360484915234

Quote

✈️💥 Satellite images of the Kushchevskaya airfield, which was attacked by a UAV on May 19.

 

❗️Judging by the photo, 3 Su aircraft were destroyed/damaged. All others have been moved or left the airfield.

 


Another thing to consider: the Russians' Kharkiv adventure is putting a lot more pressure on Western governments to allow the use of Western weapons inside Russia. It would be quite painful for Russia if that happens in exchange for as little as it has gained.


18 hours ago, chrisl said:

You're overthinking the autonomy and mentally turning "autonomy" into "AGI".

Um, no, I'm not, and I have no clue how you arrived at that conclusion. The closest I got to AGI was mentioning ChatGPT. And while that is a marvelous piece of technology compared to everything we had a few years ago, it is not AGI. My point was that hardware requirements scale with how smart you want the drone to be, and that not everything AI (where "AI" is sadly used synonymously for all machine learning etc. these days, including image classification) runs on a mobile phone, even if it isn't ChatGPT.

18 hours ago, chrisl said:

I can make an autonomous system that will process video data and run on a 5 year old snapdragon. It's not going to have AGI or be able to write sloppy term papers, but it will identify and prioritize objects of interest, even if it hasn't seen them before.

That's a large claim to make and you provide nothing to back it up. 😉 Especially the "even if it hasn't seen them before" part. Snapdragon or not, as far as I know (granted, that may not be far enough), even the more modern neural network architectures still struggle with "out-of-distribution" inputs (i.e. things they haven't seen during training) or "generalization". You can train a network on images of cats and it will usually do a good job of identifying cats that look similar to those in the training set. It will have a hard time with red (really red: crimson or similar) cats, though, unless you specifically trained the network to ignore colors or gave it pictures of red cats.
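
To make the "red cat" point concrete, here's a toy sketch (all numbers invented): a nearest-centroid classifier on raw color features, standing in for a network that latched onto color during training. It handles in-distribution cats fine and confidently misfiles the crimson one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": RGB color features. Cats are greyish-brown,
# dogs are reddish-tan. All values are made up for illustration.
cats = rng.normal([0.45, 0.40, 0.35], 0.05, size=(200, 3))
dogs = rng.normal([0.75, 0.35, 0.25], 0.05, size=(200, 3))

cat_centroid, dog_centroid = cats.mean(axis=0), dogs.mean(axis=0)

def classify(x):
    """Nearest-centroid classifier on raw color -- a stand-in for a
    network that keyed on color during training."""
    d_cat = np.linalg.norm(x - cat_centroid)
    d_dog = np.linalg.norm(x - dog_centroid)
    return "cat" if d_cat < d_dog else "dog"

typical_cat = np.array([0.44, 0.41, 0.36])   # in-distribution
crimson_cat = np.array([0.85, 0.10, 0.15])   # out-of-distribution red cat

print(classify(typical_cat))   # cat
print(classify(crimson_cat))   # dog -- the red coat lands nearer the dog centroid
```

A real network has far richer features, but the failure mode is the same: what it never saw during training, it has no principled way to handle.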

Which plays into this:

18 hours ago, chrisl said:

A few sensors and some rules is probably sufficient if you can send them to an environment where there are no friendlies.

It is not just about having only "valid" (no friendlies, no civilians) targets in the area. You (possibly) also don't want your drone to be wasted on the wrong targets or be easily fooled. It is one thing to have a poster-boy tank sitting in an open field. A simple image recognizer will struggle with a tank with a very different camo than what it was trained on, or with extra shapes on it: a cope cage, a turtle shell, you name it. Not to mention partial occlusion, bad weather, etc. You don't want your drones to go for cardboard tanks while ignoring the real ones. 

What's more: maybe a snapdragon is enough for detecting and tracking targets. If that is all you want to do, perfect. If you want your drone to navigate a dense forest while coordinating with the rest of the drone swarm, then all that has to run on top of the image recognition, and that snapdragon may no longer be sufficient.

All that said, I think we agree that mobile phone hardware is very probably sufficient for primitive autonomous drones. As I said above, my entire point was that hardware requirements scale with what you want the drone to do. And I'm fairly certain that mobile phone hardware is not going to solve all your heart's desires in that regard.

 


2 hours ago, Butschi said:

All that said, I think we agree that mobile phone hardware is very probably sufficient for primitve autonomous drones. As I said above, my entire point was that hardware requirements scale with what you want the drone to do. And I'm fairly certain that mobile phone hardware is not going to solve all your heart's desires in that regard.

Yes, that is the primary point to focus on.  Simplistic autonomous flight and targeting doesn't require large and expensive drones.  Will it be perfect?  No, for sure it will not be.  Just as the tens of thousands of commercial FPV drones used thus far were not perfect either.  But they were effective, and effective is what matters most.

You mentioned spoofing to fool AI pattern recognition.  This is indeed possible, just as it is being done now with WW2-type dummy targets, some of which are extremely sophisticated.  However, dummies are expensive and really only practical for bigger-ticket items or something that is expected to be sitting still.  Coming up with dummies that mimic a fast-moving tank is unlikely.

Steve


24 minutes ago, Battlefront.com said:

You mentioned spoofing to fool AI pattern recognition.  This is indeed possible to do just as it is being done now with WW2 type dummy targets.  Some of which are extremely sophisticated.  However, dummies are expensive and really only practical for bigger ticket items or something that is expected to be sitting still.  Coming up with dummies to mimic a fast moving tank are unlikely.

I was thinking about a much simpler method. Have a few cardboard billboards with images of tanks on them placed prominently. A simple image recognition network that has only a single camera as a sensor has no 3D information, and so has no way of deciding whether the tank is a real tank or just a flat image if it approaches more or less at a right angle. Place the others to cover different angles. Sure, it's a stationary drone trap, but a very cheap one. Mount them on cars for moving targets.
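
For the geometry-minded, a quick sketch (made-up coordinates, ideal pinhole camera) of why a single view can't help: any 3D shape can be flattened onto a plane at fixed depth without changing a single pixel.

```python
import numpy as np

def project(points, f=800.0):
    """Pinhole projection: (X, Y, Z) -> pixel (f*X/Z, f*Y/Z)."""
    pts = np.asarray(points, dtype=float)
    return pts[:, :2] * f / pts[:, 2:3]

# A crude 3D "tank": corner points at several different depths (meters)
tank = np.array([[-2.0, -1.0, 50.0],
                 [ 2.0, -1.0, 50.0],
                 [-2.0, -1.0, 54.0],
                 [ 1.0,  1.5, 52.0]])

# Build a flat billboard at constant depth Z = 50 that projects to the
# SAME pixels: rescale each point back onto that plane along its ray.
billboard = tank * (50.0 / tank[:, 2:3])

print(np.allclose(project(tank), project(billboard)))  # True: identical images
```

From this one viewpoint the two are literally indistinguishable; any difference only appears once the camera moves.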


Posted (edited)
52 minutes ago, Butschi said:

I was thinking about a much simpler method. Have a few cardboard billboards with images of tanks on them placed prominently. A simple image recognition network that has only a single camera as sensor has no 3D information and so has no way of deciding if the tank is a real tank or just a flat image if it comes more or less at a right angle. Place the others covering different angles. Sure, a stationary drone trap but a very cheap one. Mount them on cars for moving targets.

A moving camera has 3D information.

I used to bike race with a guy who had only one eye.  He could not only ride close in a paceline at speed just fine, he could yell at the person in front of him for not following close enough.

Training the algorithm is the part that requires a lot of computing power and data. You can run it on something much lighter-weight than what you use for training.  A rough example (I don't know that they even use ML for it) is terrain-relative navigation for Mars landings.  That runs on a machine with the capability of a mid-line 1998 Mac laptop (a RAD750 at something like 200 MHz, not even as fast as the "high end" WallStreet).  That machine doesn't do all the pre-processing necessary to make it possible; it just takes video input and drives actuators.  

(ETA: I did a little search, and the Mars helicopter does all its navigation using autonomous feature detection in real time.  It's running on a Snapdragon, which is the real significance of the helicopter - it's many generations newer than the computer that runs the rover that carried it there.  The light-time to Mars makes joysticking impossible.)
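
For a flavor of how light such feature detection can be, here's a minimal Harris-style corner detector in plain numpy (a sketch for illustration, not what JPL actually flies): a handful of multiplies and adds per pixel, nothing a phone-class chip would sweat.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Minimal Harris corner response on a grayscale float image.
    Gradients via finite differences; a 3x3 box sum stands in for
    the usual Gaussian window."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):  # 3x3 box filter via shifted sums
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# Synthetic image with one bright square: corners should score highest
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
print(peak)  # lands near one of the square's corners
```

Edges come out with a negative response and corners positive, which is exactly the "trackable feature" signal a vision-based navigator wants.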

Edited by chrisl

Posted (edited)
59 minutes ago, Hapless said:

Not sure this one has surfaced here yet:

Obviously there's a lot of focus on the positives of drones, but how often do we think about how much they can encourage higher commanders to micromanage?

So that was a UKR drone with an overlaid RU radio intercept?  Fascinating, thx for sharing Hapless (oh, and thanks for all the youtube vids while I am at it).  So the commander is micromanaging all the way down to "tell them to fire at the ATGMs", as if they couldn't figure that out?  He sounded more like an obnoxious fan at a sporting event than a commander of anything.  Then, as the punch line to it all, there's a huge bavovna as they are ordered to drive over the mines.  Made my day.

Edit: or was that an ATGM hit that caused the explosion?

Edited by danfrodo

51 minutes ago, Butschi said:

I was thinking about a much simpler method. Have a few cardboard billboards with images of tanks on them placed prominently. A simple image recognition network that has only a single camera as sensor has no 3D information and so has no way of deciding if the tank is a real tank or just a flat image if it comes more or less at a right angle. Place the others covering different angles. Sure, a stationary drone trap but a very cheap one. Mount them on cars for moving targets.

This, or other tricks like it, would work against modern AI*. We might find ways to teach AI to get wise to such tricks in the near-future (hard to guess how "near", but surely it can't be an impossible problem). So I guess a good question might be, are we discussing modern warfare (say, now to 5 years from now) or near-future warfare (say 10-20 years from now)? If we're talking modern warfare, then stupid-seeming tricks like this will almost certainly work very well against any AI that are put into the field. If we're talking near-future warfare, it's reasonable to assume that we'll have figured out how to make AI that doesn't fall for these sorts of tricks (probably, though the pace of technological development is notoriously difficult to predict, with some breakthroughs coming earlier than anticipated and others coming far later than expected).

*First rule of AI: what's hard is easy, and what's easy is hard. If it's prohibitively difficult for a human to do, chances are it's trivially easy for an AI to do. But if it's something a human finds trivially easy (such as recognizing the difference between a tank and a billboard with an image of a tank), there's a good chance it will stump a modern AI. The hard part of getting AI to work the way we want isn't figuring out how to get it to do the things we think are hard. It's figuring out how to get it to do the things we think are easy.


26 minutes ago, chrisl said:

A moving camera has 3D information.

How so? Sorry, but your statement is really easy to falsify. Just use my billboard example: if the camera moves straight towards a 2D plane that is perpendicular to the movement, the camera has exactly zero 3D information and no way of finding out that it is a 2D plane.

I know what you intend to say: you can derive 3D information by making assumptions, using heuristics, and in the right circumstances. If you see a car moving you can derive distance and speed once you make an assumption about its size and shape. You use your experience to do that (a child will have a harder time than you) but you can be entirely wrong if the car you assumed to be 5m long is, in fact, just a toy car of 5cm.

Moreover, that isn't how simple detection and tracking works. Usually detection is frame by frame, and afterwards tracking tries to match the individual detections to each other (maybe priming detection to take a closer look at a region of interest). But it is not "OK, we now have 100 detections of this tank, let's put them all together and see if it is really a tank". You can always do better, sure, but better is more expensive.
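
A minimal sketch of the detect-then-associate loop I mean (greedy IoU matching between last frame's tracks and this frame's detections; real trackers add motion prediction, track birth/death, etc.):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedy frame-to-frame matching: each track claims the unclaimed
    detection it overlaps most, if the IoU clears the threshold."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, thresh
        for i, dbox in enumerate(detections):
            if i not in used and iou(tbox, dbox) > best_iou:
                best, best_iou = i, iou(tbox, dbox)
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# Frame N tracks vs frame N+1 detections: track 1 moved slightly,
# track 2 has no plausible match and would be coasted or dropped.
tracks = {1: (10, 10, 50, 50), 2: (100, 100, 140, 140)}
dets = [(12, 11, 52, 51), (300, 300, 340, 340)]
print(associate(tracks, dets))  # {1: 0}
```

Note that each detection is still judged one frame at a time; the association step links them but never re-asks "is this really a tank?".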

53 minutes ago, chrisl said:

Training the algorithm is the part that requires a lot of computing power and data. You can run it on something much lighter weight than you use for training.

This isn't necessarily true, either. It depends on how the training is done and how inference is done later. For instance, in some cases, when using the network afterwards you draw many samples from some distribution. During training, often just one sample was used (but for a batch of different input data), whereas later on you draw many samples for just a single input situation. That is not necessarily less computing- (or memory-!) intensive than training. Also, during training you don't care about things like real-time capability. Coming back to our drone, it can really make a difference whether you can make updates at 0.5 Hz or 10 Hz. Even if training needs more compute, that doesn't mean you can run every trained model in real time on a phone.


2 minutes ago, Centurian52 said:

This, or other tricks like it, would work against modern AI*. We might find ways to teach AI to get wise to such tricks in the near-future (hard to guess how "near", but surely it can't be an impossible problem). So I guess a good question might be, are we discussing modern warfare (say, now to 5 years from now) or near-future warfare (say 10-20 years from now)? If we're talking modern warfare, then stupid-seeming tricks like this will almost certainly work very well against any AI that are put into the field. If we're talking near-future warfare, it's reasonable to assume that we'll have figured out how to make AI that doesn't fall for these sorts of tricks (probably, though the pace of technological development is notoriously difficult to predict, with some breakthroughs coming earlier than anticipated and others coming far later than expected).

*First rule of AI. What's hard is easy, and what's easy is hard. If it's prohibitively difficult for a human to do, chances are it's trivially easy for an AI to do. But if it's something that a human finds to be trivially easy (such as recognizing the difference between a tank, and a billboard with an image of a tank), there's a good chance it will stump a modern AI. The hard part of getting AI to work they way we want it to work isn't figuring out how to get it to do the things we think are hard. It's figuring out how to get it to do the things we think are easy.

Sure, countering my simple trick is doable. We were talking about cheap autonomy that can be done now, and a cheap counter to that. In 10-20 years our mobile phones - if mobile phones are still a thing then - will put our present-day gaming rigs to shame, and given how rapidly AI has developed during the last decade we can only guess what things will look like in 20 years.


1 minute ago, Butschi said:

How so? Sorry but your statement is really easy to falsify. Just use my billboard example: If the camera moves straight towards a 2D plain that is perpendicular to the movement that camera has exactly zero 3D information and no way of finding out that it is a 2D plain.

I know what you intend to say, you can derive 3D information, making assumptions, using heuristics and in the right circumstances. If you see a car moving you can derive distance and speed once you make an assumption about its size and shape. You use your experience to do that (a child will have a harder time than you have) but you can be entirely wrong if the car you assumed to be 5m long is, in fact, just a toy car of 5cm.

Moreover that isn't how simple detection and tracking works. Usually detection is frame by frame and afterwards tracking tries to match the indivual detections to each other (maybe priming detection to take a closer look at a region of interest). But it is not "ok we now have 100 detections of this tank, lets put them all together and see if it is really a tank". You can always do better, sure, but better is more expensive.

This isn't necessarily true, either. Depends on how the training is done and how inference is done later. For instance, in some cases, when using the network afterwards you draw many samples from some distribution. During training often just one sample was used (but for a batch of different input data) whereas later on you draw many samples for just a single input situation. That is not necessarily less computing (or memory!) intensive than training. Also, during training you don't care about things like real time capabilities. Coming back to our drone, it can really make a difference if you can make updates with 0.5 Hz or 10 Hz. Even if training needs more compute power that doesn't mean you can run every trained model in real time on a phone.

So if the camera moves along exactly the right (or really wrong) path, you can't see that the thing is flat.  So what?  Drones aren't going to just move in a straight line without gimbaling the camera.  You're thinking too automotively.  You don't even need heuristics or assumptions if you have a reasonable IRU (which can be done on a chip) - you can fly around and build a 3D model.  My dentist does that in real time now with a stick that they wiggle around in my mouth to make crowns, instead of making a casting.
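
A toy sketch of the principle (made-up geometry, ideal pinhole camera, known motion): give the camera even a small sideways baseline and a flat fake betrays itself, because every point on it shows identical parallax, while a shape with real depth doesn't.

```python
import numpy as np

def project(points, cam_x=0.0, f=800.0):
    """Pinhole projection from a camera translated along X."""
    pts = np.asarray(points, dtype=float)
    return (pts[:, :2] - [cam_x, 0.0]) * f / pts[:, 2:3]

def parallax_spread(points, baseline=2.0):
    """How unevenly the pixels shift between two camera positions.
    A flat, fronto-parallel target shifts uniformly (spread ~ 0)."""
    shift = project(points, cam_x=baseline) - project(points, cam_x=0.0)
    return np.ptp(shift[:, 0])  # spread of horizontal parallax

# Invented corner points: a 3D "tank" spanning several depths, and a
# flat cutout at constant depth that looks identical from the first view.
tank = np.array([[-2.0, -1.0, 50.0], [2.0, -1.0, 50.0],
                 [-2.0, -1.0, 54.0], [1.0, 1.5, 52.0]])
billboard = tank * (50.0 / tank[:, 2:3])

print(parallax_spread(billboard))  # ~0: every point shifts the same amount
print(parallax_spread(tank))       # clearly nonzero: depth variation shows up
```

That depth-dependent shift is exactly what structure-from-motion (or a dentist's wiggling scanner) exploits.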

Tracking without doing individual frame-by-frame detection is not only possible, there are multiple ways to do it.  Doing frame-by-frame detection and stitching sucks when you have low SNR or are near the resolution limit, but you can improve SNR by taking advantage of the relative motion of the camera and the scene.
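
The SNR gain from stacking registered frames is the usual sqrt(N) averaging effect; a quick numerical sanity check (entirely synthetic numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 0.5        # faint target: single-frame SNR = 0.5
noise_sigma = 1.0   # per-frame noise standard deviation

def snr_after_stacking(n_frames, n_trials=2000):
    """Average n registered frames of the same scene; the signal stays
    put while the noise shrinks as sqrt(n_frames)."""
    frames = signal + rng.normal(0.0, noise_sigma, size=(n_trials, n_frames))
    stacked = frames.mean(axis=1)
    return signal / stacked.std()

print(snr_after_stacking(1))    # ~0.5
print(snr_after_stacking(16))   # ~2.0, i.e. a sqrt(16) = 4x improvement
```

This assumes the frames are well registered, which is where knowing the camera-scene relative motion comes in.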


1 hour ago, Hapless said:

Obviously there's a lot of focus on the positives of drones, but how often do we think about how much they can encourage higher commanders to micromanage?

I heard from American acquaintances who'd served in Afghanistan at least ten years ago about multiple instances of senior NCOs who effectively hijacked mast- or balloon-mounted surveillance cameras to spy on their own patrols to ensure that no one was violating petty uniform regulations.

"Tell Private So-and-so to get his eye pro back on. And by God Sergeant, roll down your sleeves again!"


50 minutes ago, chrisl said:

So if the camera moves at exactly the right (or really wrong) path, you can't see that the thing is flat.  So what.  Drones aren't going to just be moving in a straight line and not gimbaling the camera.  You're thinking too automotively.  You don't even need heuristics or assumptions if you have a reasonable IRU (which can be done on a chip) - you can fly around and build a 3D model.  My dentist does that realtime now with a stick that they wiggle around in my mouth to make crowns, instead of making a casting.

Tracking without doing individual frame by frame detection is not only possible, but there are multiple ways to do it.  Doing frame by frame and stitching sucks when you have low SNR or are near the resolution limit, but you can improve SNR by taking advantage of relative motion of the camera and scene.

Exactly.  And moving targets are even harder to spoof with 2D cutouts.

Also, why assume we'd only be talking about daylight optical recognition with nothing else?  Heat signatures and assessment of mass are very easy to do.  Basic thermal imaging is fairly cheap now, so I expect it will be the de facto standard for autonomous drones.

The take-away here is that implementing a detection system is easier than developing a practical spoof.  Very similar to current anti-drone measures, such as netting and cages.  They might help a little, but not a lot, and at the cost of something (resources and mobility, for example).
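
Even the crudest sensor fusion makes the cardboard trick expensive, since the fake now has to beat every channel at once. A toy sketch (scores, weights, and threshold are all invented, not from any real system):

```python
def fused_target_score(optical, thermal, w_optical=0.4, w_thermal=0.6):
    """Naive late fusion of two classifier scores in [0, 1].
    Weights are arbitrary illustration values."""
    return w_optical * optical + w_thermal * thermal

THRESHOLD = 0.5  # hypothetical engagement threshold

real_tank = fused_target_score(optical=0.9, thermal=0.8)   # hot engine block
cardboard = fused_target_score(optical=0.9, thermal=0.05)  # ambient-temp decoy

print(real_tank > THRESHOLD)   # True
print(cardboard > THRESHOLD)   # False: it fooled the camera, not the IR sensor
```

A decoy now needs heaters (and mass, for radar or acoustic channels) to keep up, which is exactly why practical spoofs get expensive fast.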

Steve


49 minutes ago, Butschi said:

Sure, countering my simple trick is doable. We were talking about cheap autonomous that can be done now and a cheap counter to that. In 10-20 years our mobile phones - if mobile phones are still a thing then - will put our present day gaming rigs to shame and given how rapidly AI has developed during the last decade we can only guess what things will look like in 20 years.


 

Quote

 

https://machinelearning.apple.com/research/roomplan

3D Scene understanding has been an active area of machine learning (ML) research for more than a decade. More recently the release of LiDAR sensor functionality in Apple iPhone and iPad has begun a new era in scene understanding for the computer vision and developer communities. Fundamental research in scene understanding combined with the advances in ML can now impact everyday experiences. A variety of methods are addressing different parts of the challenge, like depth estimation, 3D reconstruction, instance segmentation, object detection, and more. Among these problems, creating a 3D floor plan is becoming key for many applications in augmented reality, robotics, e-commerce, games, and real estate.

 

This is a long, fairly technical article about how Apple puts together a 3D scan of a room. A decent autonomous drone control system would use a mostly analogous process.
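
The floor-plan step in such a pipeline can be caricatured in a few lines: flatten 3D returns into a 2D occupancy grid (a toy sketch with invented points, nothing like Apple's actual method):

```python
import numpy as np

def floor_plan(points, cell=0.5, extent=10.0):
    """Collapse 3D points (x, y, z) in meters into a 2D occupancy grid:
    a crude stand-in for the floor-plan step of a scene-understanding
    pipeline."""
    n = int(extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    for x, y, _ in points:
        i, j = int(x / cell), int(y / cell)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] = True
    return grid

# Hypothetical LiDAR returns: a wall along x = 2.0, plus one stray point
wall = [(2.0, y * 0.25, 1.0) for y in range(20)]
points = wall + [(7.3, 7.9, 0.5)]

grid = floor_plan(points)
print(grid.sum())  # 11 occupied cells: 10 for the wall, 1 for the stray point
```

A drone would feed the same kind of grid to a path planner; the hard part is the registration and segmentation upstream, not this projection.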

And as Steve has been emphasizing, in a real war perfection is not a requirement.

