
How Hot is Ukraine Gonna Get?


Probus


8 minutes ago, billbindc said:

Crucially, it is these unmanned systems – such as drones –  along with other types of advanced weapons, that provide the best way for Ukraine to avoid being drawn into a positional war, where we do not possess the advantage.

"avoid"?

Edited by JonS

4 hours ago, Battlefront.com said:

The scary part is that what we're talking about is not even close to the bottom.  Autonomous nano might not even be.  Autonomous nano-delivered gene splicing... that's likely the bottom, because when we reach that we're probably done as a species.

Gentlemen, I hate to repeat myself, but when the drone discussion comes back again and again I cannot silence the voices in my head screaming "it is in the book! It is in the book!" https://www.everand.com/audiobook/636867143/The-Invincible

Hard SF from 1964, so give it some slack - there are some typically anachronistic scenes, e.g. landing a spaceship looks like a cross between landing a Jumbo Jet and docking a ship. But otherwise brilliant. The link is to the audiobook, because the English online bookshops, for reasons unknown, contain spoilers.


Hey grogs, for the benefit of refuXeniks (like me), it seems that the excellent mirror site Nitter has been killed by another policy change, perhaps for good this time. If you click on a nitter feed you get a 'certificate expired' error message.

Explanation is here (though I'm not a techie and can't verify the details).

"Nitter currently relies on the mass generation of guest accounts, a weird anonymous form of account that was only supported by old versions of the Twitter app. Creation of them was totally disabled today, so every nitter instance will be dead in under 30 days (when they expire). Scrapers apparently also relied on this, as every public nitter instance was being hammered by scrapers earlier. Instances will probably shut down quite soon unless someone finds another way to create tens of thousands of accounts in an automated fashion for free."

That said, there's another mirror site that still seems to work; you can follow the feeds, but can't get to individual posts (or comments):

https://n.opnxng.com/DefMon3

(or whatever handle you're looking for)

Edited by LongLeftFlank

9 hours ago, Haiduk said:

We have a shortage of artillery shells, so FPV is our only hope

UKR troops repelled a company-size assault of Russian troops near Novomykhailivka (Maryinka - Vuhledar sector), using mostly FPV attacks. 12 Russian vehicles were destroyed/damaged/abandoned

The UCAV company of 72nd mech. brigade scored 9 vehicles, the 2nd mech. battalion of 72nd brigade - 1, and the ATGM company of the brigade's AT-battalion - 1

 

A screaming comes across the sky....

Good lord, mech really is toast, isn't it, absent a phase change in ECM?

Time to get busy standing up those leg battalions.

...Well, maybe not entirely leg - mobility-enhanced with these things and various other nimble, hard-to-hit ATVs, plus the aforementioned jetpacks.

 

****

P.S.  Cool score on the video, any idea who it's by?

Edited by LongLeftFlank

11 minutes ago, LongLeftFlank said:

****

P.S.  Cool score on the video, any idea who it's by?

Козаки йдут (feat. Анна Булат) - there's a pretty cool app called Shazam that I keep recommending; it's able to find the titles of LOTS of random music I stumble upon after playing for a few seconds


11 hours ago, Haiduk said:

We have a shortage of artillery shells, so FPV is our only hope

UKR troops repelled a company-size assault of Russian troops near Novomykhailivka (Maryinka - Vuhledar sector), using mostly FPV attacks. 12 Russian vehicles were destroyed/damaged/abandoned

The UCAV company of 72nd mech. brigade scored 9 vehicles, the 2nd mech. battalion of 72nd brigade - 1, and the ATGM company of the brigade's AT-battalion - 1

 

So you think maybe FPVs are working better than artillery at some point?  I mean that is two full companies' worth of vehicles right there.  Probably at a fraction of the cost of artillery (when one takes overhead into account).  That is one helluva “only hope”.  Enough of these and the UA might just get the RA to buckle yet.


That was exactly what I wondered after seeing recent clips of Ukrainian units from the front.

Imagine you were a battalion commander and someone came up to you and said:

"You can have 1000 pieces of supply - either 1000 152mm/155mm shells (no PGM) or 1000 FPV drones, or a mix of the two of your choice, just never more than 1000 in total."

Disregard the costs, imagine that it is a slider you can adjust in CM, and you already have both experienced artillery gunners and experienced drone teams. Just keep in mind that EW exists.

Have we reached a point where the FPV drones would be more desirable than the shells?
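If you want to put rough numbers on that slider, here is a minimal back-of-envelope sketch in Python. Every figure in it (the kill probability per dumb shell, the share of FPVs that get through EW, the kill chance per connecting drone) is a made-up placeholder, not a sourced number, so it only shows how the answer swings with those assumptions, not what the real answer is:

# Toy model for the "1000 shells vs 1000 FPV drones" slider.
# Every probability below is an invented placeholder, not a sourced figure.

def expected_vehicle_kills(n_shells, n_fpv,
                           p_kill_per_shell=0.01,   # assumed Pk of an unguided 152/155mm shell vs a vehicle
                           p_fpv_gets_through=0.6,  # assumed share of FPVs that survive EW/jamming
                           p_kill_per_fpv=0.5):     # assumed kill/mission-kill chance per connecting FPV
    return n_shells * p_kill_per_shell + n_fpv * p_fpv_gets_through * p_kill_per_fpv

# Sweep the slider from all-shells to all-FPV in steps of 250.
for n_fpv in range(0, 1001, 250):
    n_shells = 1000 - n_fpv
    kills = expected_vehicle_kills(n_shells, n_fpv)
    print(f"{n_shells:4d} shells / {n_fpv:4d} FPV -> ~{kills:.0f} expected vehicle kills")

With those invented numbers the slider obviously favours the drones, but crank the EW loss rate up, or remember that shells also do suppression, smoke and area fires that this toy ignores, and it swings back the other way.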

Edited by Carolus

https://x.com/TreasChest/status/1753285123418853818?s=20

 

The link doesn't want to embed, but it shows a pic of the number of Lancet strikes per month, and that the numbers went down after a Ukrainian strike on a factory for optics.

Deep strike works. Deeper strikes work harder.

Strategic bombing affects the tactical situation for the grunts on the ground.

Edited by Carolus

On 1/31/2024 at 2:17 PM, Vet 0369 said:

Not really sure what “early assault failures” you are referring to. The only amphibious assault “failure” that I remember reading about was the initial Japanese amphibious assault on Wake Island shortly after their attack on Pearl Harbor. The Japanese launched an amphibious assault that was defeated by the U.S. Marine Detachment, who not only defeated the only amphibious assault to fail (in the Pacific), but also sank a Japanese destroyer while they were doing it.

Yes, the Marine and Army amphibious assaults were very costly for the Marines and Soldiers who made them (especially during the Navy’s island hopping campaign), but NONE of them “failed!”

I'm speaking at quite a granular level, battalion or lower. I was highlighting the negative effects of the lack of experience and appropriate gear. Also, while the overall attack or battle was eventually won, the initial waves of a given battle early in the war often failed in their immediate tactical objectives, becoming trapped, bogged down, and decimated, or simply stalled. Follow-on waves usually broke through, by simply grinding the defenders down.

But as lessons were learned and implemented, the success rate of those early assault waves ratcheted up relentlessly.


20 hours ago, Haiduk said:

Well, I was too busy these weeks, now I'm back )

France recently handed over 40 SCALPs. I veeery much doubt 12 of them were wasted in such a way

Local TGs yesterday reported a missile strike hit three fighter jets (Su-27 and Su-30); one of them was destroyed and at least 8 men were killed. But today the ASTRA TG claimed the communications post of Belbek airfield was hit with two missiles.

SCALP is impacting - SAM missile is launching %)

 

Follow-up about the strike on Belbek airfield. Russian General Tatarenko was killed along with 9 others.

Imagine the United States was losing Generals at the same rate in a war. It is bonkers.

Edited by Carolus

4 minutes ago, Carolus said:

Follow-up about the strike on Belbek airfield. Russian General Tatarenko was killed along with 9 others.

Imagine the United States was losing Generals at the same rate in a war. It is bonkers.

Or with a Russian spin on events:

General Tatarenko bravely personally intercepted an incoming missile and prevented it hitting the airfield. He is said to be lightly injured. 


The term "AI" is doing a lot of legwork in this discussion and, I think, is being used to refer to different things a lot of the time.

As previously noted, "AI" of sorts is already integrated into weapon systems at the level of smart munitions and augmented feedback to operators of FPVs, for example.  The "AI" I think most people are concerned about/interested in, in the context of the next few years, is the kind of AI that will be able to meaningfully automate processes which have, to date, been too complicated or nuanced to take away from human beings: mainly target selection and prosecution within a defined combat zone.  That's all fine and those are the types of "AI" which will (I think inevitably) be integrated into our next-generation fighting systems.

Beyond that there is the kind of AI that starts being employed against the enemy's AI.  For me, this is where things start to get interesting.  I think at this point AI is at least as heavily employed in deceiving death swarms and Terminators as it is in driving them, and that means that warfare will become extremely dynamic: the best way to defeat an AI-driven war machine is to make sure it doesn't recognise you in the first place, and there are countless unimagined ways of making that happen.  War, warriors and weapons will only appear recognisable to our eyes for as long as AI doesn't get too good.  Once it does start to get there, we will simply change what they look like (hold that thought).

And then we start saying things like:

17 hours ago, OBJ said:

The time when wars are fought exclusively by AI systems is still a ways off, if we ever get there.

People will be involved in warfighting for a long time to come.

Now please don't misunderstand me; I think that this is an interesting thought and idea to discuss, but I also think that, in an effort to scout ahead, it has not-altogether-deliberately strayed a bit off-map.  "AI" no longer means the same "AI".

If we ever get to the point when "wars are fought exclusively by AI systems" or when people are not involved in warfighting then I see that world reflecting one of two possibilities:

  1. People no longer exist.  If they did exist, they would still throw shade, b***hslap each other and get into large-scale brawls, which would take the sociological place of whatever warfare becomes once AI has excluded us from it, and that would then become the new, real warfare.  In other words, if we are ever excluded from "warfare" because AI is just too damned efficient and lethal, then that will suddenly solve absolutely nothing and we will go somewhere else and start fighting again, without it.  The people who are unsatisfied by AI-controlled warfare will simply change warfare to be something else entirely.  Or;
  2. People get imaginative enough to realise that AI isn't best used to target enemy machines with explosives any more than a nuclear reactor is best used to heat the cavalry's stables.  If AI is in such a derivative state that wars could theoretically be fought by it to the exclusion of actual people then we should find a far better use for Marvin than what convention would currently consider the military domain.  If AI is this powerful it should be working primarily in the information domain, ironing out conflicting certainties (thanks for introducing useful terminology, Capt) at the level of the information people absorb and believe on a day-to-day basis.  In this way AI should be winning wars before we even know they've begun and yes, that means that, as far as we're concerned, AI should be preventing warfare altogether.  To the extent that such a thing may not be possible, AI should work to mitigate whatever level of conflict turns out to be necessary between human beings but that will probably still mean allowing us to do it ourselves in order to make sure something actually gets resolved in the process.

Tldr: I think that, if AI advances to the point that it could exclude us from warfare altogether then the political and natural sciences, healthcare and economics will be the fields upon which those wars are won, not the trenches and treelines around Avdiivka.


26 minutes ago, Tux said:

The term "AI" is doing a lot of legwork in this discussion and, I think, is being used to refer to different things a lot of the time.

As previously noted, "AI" of sorts is already integrated into weapon systems at the level of smart munitions and augmented feedback to operators of FPVs, for example.  The "AI" I think most people are concerned about/interested in in the context of the next few years is the kind of AI that will be able to meaningfully automate processes which have, to-date, been too complicated or nuanced to take away from human beings:  mainly target selection and prosecution within a defined combat zone.  That's all fine and those are the types of "AI" which will (I think inevitably) be integrated into our next generation fighting systems.

Beyond that there is the kind of AI that starts beying employed against the enemy's AI.  For me, this is where things start to get interesting.  I think at this point AI is at least as heavily employed in deceiving death swarms and Terminators as it is in driving them and that means that warfare will become extremely dynamic: the best way to defeat an AI-driven war machine is to make sure it doesn't recognise you in the first place and there are countless unimagined ways of making that happen.  War, warriors and weapons will only appear recognisable to our eyes for as long as AI doesn't get too good.  Once it does start to get there, we will simply change what they looks like (hold that thought).

And then we start saying things like:

Now please don't misunderstand me; I think that this is an interesting thought and idea to discuss but I also think that, in an effort to scout ahead, it has not-altogether-deliberately strayed a bit off-map.  "AI" does not mean the same "AI", any more.

If we ever get to the point when "wars are fought exclusively by AI systems" or when people are not involved in warfighting then I see that world reflecting one of two possibilities:

  1. People no longer exist.  If they did exist then they would still throw shade, b***hslap each other and get into large scale brawls which would take the sociological place of whatever warfare is now that AI has excluded us from in the future and that would then become the new, real warfare.  In other words if we are ever excluded from "warfare" because AI is just too damned efficient and lethal then that will suddenly solve absolutely nothing and we will go somewhere else and start fighting again, without it.  The people who are unsatisfied by AI-controlled warfare will simply change warfare to be something else entirely.  Or;
  2. People get imaginative enough to realise that AI isn't best used to target enemy machines with explosives any more than a nuclear reactor is best used to heat the cavalry's stables.  If AI is in such a derivative state that wars could theoretically be fought by it to the exclusion of actual people then we should find a far better use for Marvin than what convention would currently consider the military domain.  If AI is this powerful it should be working primarily in the information domain, ironing out conflicting certainties (thanks for introducing useful terminology, Capt) at the level of the information people absorb and believe on a day-to-day basis.  In this way AI should be winning wars before we even know they've begun and yes, that means that, as far as we're concerned, AI should be preventing warfare altogether.  To the extent that such a thing may not be possible, AI should work to mitigate whatever level of conflict turns out to be necessary between human beings but that will probably still mean allowing us to do it ourselves in order to make sure something actually gets resolved in the process.

Tldr: I think that, if AI advances to the point that it could exclude us from warfare altogether then the political and natural sciences, healthcare and economics will be the fields upon which those wars are won, not the trenches and treelines around Avdiivka.

That last part is really interesting.  I have been wondering what happens when an entire nation simply hands off its entire economy management to AI - interest rates, investments/divestments, trade policy etc.  Will there come a point where AI becomes our gods, benevolent and truly objective and just?

Of course since we created them, those gods are bound to have flaws…some no doubt fatal.  As it relates to war, there are some certainties that can never be fully offloaded to AI - this is the part Clausewitz missed.  There are deep irrational certainties within us - culture, identity, suspicion and superstition.  AI could assist in smoothing these to some extent, but they could not remove them completely without removing us (note: this is less likely Skynet extermination and more pushing our evolution until we are no longer human).  

Why we go to war is an incredibly complex concept.  Clausewitz said “politics!”  Which is true.  But what is politics?  What drives politics?  In his mind it was some rational Prussian utopia of nicely compartmentalized structures.  In reality it is a human soup that makes chaos queasy.  We go to war because an imaginary Type VII civilization says we should.  We go to war because we imagine what other people are doing so hard that it becomes reality - our uncertainty becomes certainty. (Seriously how messed up is that.)

We go to war, apparently, because we think we are better than everyone else.  Which is a poor attempt to cover over our own uncertainty, which, left untreated, can tear a society apart.  So until AI can effectively “solve for human” we are likely still going to see violent collisions of certainty.


1 hour ago, Carolus said:

Follow-up about the strike on Belbek airfield. Russian General Tatarenko was killed along with 9 others.

Imagine the United States was losing Generals at the same rate in a war. It is bonkers.

Actually, we (the US) may have excess capacity, or past expiration stock.

In the Antal video provided by Hapless, Antal recounts a conversation in which a US General asserted China could never hit Hawaii. At the pace warfare is changing, the majors may be better suited than the major generals.

 


18 minutes ago, The_Capt said:

That last part is really interesting.  I have been wondering what happens when an entire nation simply hands off its entire economy management to AI - interest rates, investments/divestments, trade policy etc.  Will there come a point where AI becomes our gods, benevolent and truly objective and just?

I do not see that coming. Wars are existential affairs for people, and maybe for whole nations. They belong to the class of events which may induce people to give up their decision-making power in pretty much everything if they are really scared for their lives or personal security - including appointing an AI as dictator.

But the whole economy? Would people see that as desirable? Using a popular cliché, if my local terminal of the wise and benevolent AI tells me to "eat bugs and be happy" because it calculates that the resulting savings at the overall economy level can be repurposed for the health service and statistically extend my life expectancy by half a day, I would tell it to F.O. and pour a bucket of water into it just to be sure. Would a US president be more inclined to give up the choice of, e.g., lowering taxes in a pre-election year? Or Stalin the possibility of tactically inducing starvation in some regions? I would think not.

An AI-controlled economy could be the basis for the ultimate planned economy, Marx's wet dream. Geographically and historically, though, people in most cases tend to reject "strong" planned economies and prefer to retain a significant degree of economic freedom. Would superhuman AI management ability change that? I doubt it.

 


7 hours ago, Carolus said:

That was exactly what I wondered after seeing recent clips of Ukrainian units from the front.

Imagine you were a battalion commander and someone came up to you and said:

"You can have 1000 pieces of supply - either 1000 152mm/155mm shells (no PGM) or 1000 FPV drones, or a mix of the two of your choice, just never more than 1000 in total."

Disregard the costs, imagine that it is a slider you can adjust in CM, and you already have both experienced artillery gunners and experienced drone teams. Just keep in mind that EW exists.

Have we reached a point where the FPV drones would be more desirable than the shells?

I think the only real advantage shells still have is that they have much longer range than the average small drone.


Jeesh, I really need to work on my communication skills.

Tux, you have given us a very thoughtful post, and for the small part of it I played a role in...

1 hour ago, Tux said:

And then we start saying things like:

19 hours ago, OBJ said:

The time when wars are fought exclusively by AI systems is still a ways off, if we ever get there.

People will be involved in warfighting for a long time to come.

 

My thoughts were:
1. It's unlikely humans will ever cede all of warfighting to AI
2. If they do, it will be humans intentionally ceding warfighting to AI

I think we have consensus, or at least majority agreement, that humans have already ceded parts of warfighting to AI, e.g. autonomous AD. My comment was based on my view that the trend of handing off decision making and execution to AI will continue in ever more warfighting functions, as in the fields you list: the natural sciences, healthcare and economics. That's not to say the entire field will be ceded to AI.

Edited by OBJ

10 minutes ago, Maciej Zwolinski said:

An AI-controlled economy could be the basis for the ultimate planned economy, Marx's wet dream. Geographically and historically, though, people in most cases tend to reject "strong" planned economies and prefer to retain a significant degree of economic freedom. Would superhuman AI management ability change that? I doubt it.

And we are back to human irrationality.  You are correct, we would rather cling to the chaos and uncertainty of "freedom."  If an AI economy could guarantee an end to poverty and unpredictable recessions/depressions, completely predictable growth, and wealth distribution that made sure we never saw class friction (we are basically talking Star Trek) - I have zero doubt we would march in the streets to reject it completely... now that is irrational (or perhaps "relatively rational" is a better term).

We are such an odd species.  Uncertainty can become a certainty we cling to and will fight for.  But here is the thing... no one out there is crying for "electricity freedom!!"  We have already shifted enormous levels of infrastructure management to autonomous AI - you know, the stuff that really freakin' matters.  So we are comfortable with that, but "God help you if you touch my money".

It is a miracle we made it this far.... 

