
Butschi

Members
  • Posts

    1,151
  • Joined

  • Last visited

  • Days Won

    4

Butschi last won the day on November 14 2023

Butschi had the most liked content!

3 Followers

Profile Information

  • Gender
    Male
  • Location
    Germany
  • Interests
    Wargaming, history, science, RPGs, good discussions, ...


Butschi's Achievements

Senior Member

Senior Member (3/3)

1.7k

Reputation

  1. Yeah, I get it. Showing Orban the middle finger feels great, I'm sure, and personally I congratulate Zelensky on it. But that doesn't pay the bills; EU financial aid does. So aggravating Orban might not be the smartest move when he has veto power and you depend on foreign aid to stay in the fight, no matter how bitter that is and whether he might veto things anyway. Especially when you could just sit back and watch the guy embarrass himself and turn the rest of the EU against him, even more than he usually does. Well, let's just hope the loss in revenue for Russia makes up for it.
  2. I'm not so sure this was a good decision. While certainly understandable, what does it actually gain Ukraine? How badly does losing that source of income hurt Russia? Orban took a lot of flak for going on his so-called mission of peace and pretending to speak for the EU while having no mandate whatsoever. So, had Ukraine just sat back, Orban would have aggravated the rest of the EU (again) and embarrassed himself while achieving nothing. There were already nice articles about how Russia rewarded this peace mission with another wave of bombing civilian targets in Ukraine. Now Orban, who more or less controls Hungarian media, can spin it as Hungary (not just Orban!) being punished for trying to negotiate peace. No doubt Ukraine was ordered to do it by the evil USA, UK and EU, who don't want peace. That ties in perfectly with Orban's narrative that the EU is to blame for everything. The other Putin friends all over Europe will merrily help Orban spread the word.
  3. @poesel only pointed out that neither Trump nor Biden will change their stance on Ukraine because of the attempt, so it is doubtful that it will have any significant impact on the war. It adds no new information beyond what was already discussed ad nauseam.
  4. Aren't you shifting the goal posts now? You claimed that "there really isn't any reasoning behind the answers" of LLMs. Providing sources, etc. is not really part of reasoning, especially for common knowledge. If I ask you to tell me how fast a spacecraft goes after a given time with speed v and acceleration a (the formula itself is spelled out after this list), I don't expect you to tell me which textbook you had at school. In fact, I just asked ChatGPT this exact question (without the textbook part). Granted, this is a simple task, but that isn't the point. The reasoning is perfectly OK, as far as I am concerned. I also didn't see any mansplaining or whatever. But even that would be beside the point. You claimed that these models are incapable of any reasoning. None of the papers you provided (interesting ones, really) actually concluded that LLMs are incapable of reasoning, just that their performance on abstract problems or ones not included in the training data is often not very good. But frankly, the same can be argued for humans. Back at school, transferring knowledge to a new problem was what you needed to get an A, not to pass the test... I'm sorry, but you didn't convince me here. As to the other point: you are right, this whole machine learning business is basically an increasingly sophisticated way of fitting parametric functions to data. But "human learning" is really not very different, right? Neurons and synapses work a little differently than neural networks in ML, but the principle (learning weights for the connections, i.e. synapses, between neurons) is very similar. Now we humans, although our brains are also just parametric fits, are capable of forming models of how our world works. Often just heuristics, but sometimes even algebraic formulae. As you said yourself, explainability is a thing in ML. Those LLMs are huge, with billions or even trillions of parameters; I think right now they are more or less black boxes. That goes both ways, though: if we can't tell how and what knowledge is stored where, we can't really say what isn't in there. So, I didn't want to spread the gospel of AI; I've argued myself at some points that AI is not as far along as some think. On the other hand, I'd really like to stress that LLMs, with all their flaws, are a huge leap in capabilities compared to what we had just a few years back, so "LLMs suck" is really something I can't agree with. OK, but I guess Steve will very soon tell us not to derail this thread any further.
  5. Technically, I was talking about the attention mechanism in transformer models (a toy numpy sketch of that mechanism follows after this list). You can use a lot of other architectures to predict a word as a function of a bunch of other words that don't have this feature; they also have worse performance, of course. Excuse me, but that sounds like a really big claim to make, given that a) there are papers out there that research reasoning capabilities in depth and come to the conclusion that LLMs have at least basic capabilities, b) many in this field use LLMs precisely because of their reasoning capabilities, and c) you can actually ask e.g. ChatGPT to reason about how it arrived at an answer. Can I kindly ask you to back up your claim? Sure, LLMs are not AGI and their answers and reasoning are often flawed. That doesn't imply they can't do any reasoning at all, though.
  6. I think you are both subscribing a little too much to this popular idea that large language models "just" learn to predict the next word (and based on garbage Internet data), so they are stupid. While the predicting-the-next-word part is true, these models learn to predict the next word by learning which words are important to which other words. So they are basically learning context. The novel thing is - and this is why those models are increasingly popular beyond just chatting - that they can reason about their answers. Sure, at the end of the day this is still just fitting parameters to data, but our brains don't work all that differently. Also, while ChatGPT, for instance, was trained in an unsupervised way on Internet data, a lot of supervised training came afterwards - talking to humans who reviewed its answers. As for hallucinations - I doubt this is just "well, the Internet is garbage, so the chatbot answers garbage." I agree that it is about a model or concept of reality. Humans generally (but not always) know the difference between reality and imagination. That is because we have sensors, i.e. eyes, ears, etc. A chatbot doesn't have that... yet.
  7. No, that post explicitly says... While I wouldn't rule out Russian sabotage, workplace accidents, even lethal ones, happen frequently. They are the reason for many oh-so-evil regulations, and blaming the Russians does sound a bit like a cheap excuse. If we are speculating, the first question should be whether this "pattern" actually exists. Meaning: has the rate of incidents in Western defense-related industry increased compared to, say, pre-war rates? The next question then is: if so, are those incidents related to ramping up production? New production lines or even facilities, especially when built quickly, invariably lead to more accidents. And let's not forget, even if Russia is to blame for some of those incidents, it doesn't mean they are responsible for all or even the majority of them.
  8. Generally, decentralizing the power grid would be the way to go. So, yes, solar panels everywhere. Well, we've kind of been in the process of doing that for years, so it is not going to happen in half a year. Much less so keeping in mind that China is the no. 1 supplier...
  9. I'm no expert on the French system, but as far as I understand it, the French president has as much or more executive power than the US president.
  10. First results for the election to the EU parliament. With a strengthening of right-wing parties (many of which are... less than opposed to Russia), things may get "interesting" in the near future. https://www.reuters.com/world/europe/european-parliament-poised-rightward-shift-after-final-voting-2024-06-09/ In France, Le Pen's party clearly won the election. As a result, Macron has dissolved parliament. https://www.politico.eu/article/eu-european-election-results-2024-emmanuel-macron-dissolve-parliament-france/ Edit: Note that the EU institutions are the second-largest donor to Ukraine, so the outcome of this election may be as big a deal for our discussion here as the next presidential election in the US.
  11. There are some for different time periods, but the ones I know are more on a grand-strategy level than tactical. Compared to wargames where at least part of the simulation is physics (and thus calculable), a realistic small-scale simulation seems difficult. You'd have to model lots of psychology and social interactions. And frankly, psychology is one of the weaker parts of CM (not that I know of other games that do it better).
  12. This. The way some here have started treating Macron as the Second Coming always makes me raise an eyebrow. Remember that Macron was called out for not doing his part - by Scholz, no less. And for good reason: French aid to Ukraine is nowhere near what others have given, neither in relative nor absolute terms. It is very clear that this is about France's position, power and influence in the EU and NATO, and probably related to upcoming elections.
  13. Fair enough. I for one just feel that my knowledge is somewhat inadequate compared to some people here with an actual military background. Which obviously doesn't prevent me from voicing my opinion.
  14. To be fair, that is true for most of us here who are mere armchair generals. Which usually doesn't keep us from judging military operations on a daily basis and with a lot of passion.
  15. I guess one takeaway here should be that nowadays everyone and their dog does AI. It's not an arcane art anymore; you can just buy the stuff from various companies. For the drone discussion, I think the middle one is most worth taking a look at: identifying and tracking different types of objects. Looks great and very stable. Some caveats: as far as I understand the description, the algorithm runs offline, i.e. on pre-recorded sequences. In that case you have a more or less infinite computing budget and no latency restrictions. And still, if you take a close look at the cars taking the left turn on the left side, the tracking stops immediately once the car is occluded by a few leaves (a toy sketch of why that happens follows after this list). The range is also not so great; tracking stops roughly under the bridge... at maybe 100 m? So, no chance of identifying a tank a kilometer away partly hidden by bushes or trees. But, again, this video is old (2017) and we can probably do better today.
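For reference, the spacecraft question in post 4 boils down to standard constant-acceleration kinematics. A minimal statement of it, written with v_0 for what the post just calls the initial speed v:

```latex
% Constant acceleration a applied to initial speed v_0 for elapsed time t:
v(t) = v_0 + a\,t , \qquad x(t) = x_0 + v_0\,t + \tfrac{1}{2}\,a\,t^2
```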
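Since posts 5 and 6 lean on the attention mechanism, here is a minimal sketch of scaled dot-product attention in plain numpy. It only illustrates the general idea - each token's output is a weighted mix over the other tokens, with the weights saying how relevant each of them is to it - and is not the actual architecture behind ChatGPT; the toy shapes and random inputs are assumptions made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # shift for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of the value vectors V,
    where the weights express how relevant each token is to this one."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise token-to-token relevance
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy example: 4 tokens with 8-dimensional embeddings (made-up numbers).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, attn = scaled_dot_product_attention(Q, K, V)
print(attn.round(2))  # row i: how much token i "attends to" each other token
```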
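And for the occlusion remark in post 15, a toy sketch of greedy IoU-based track association. This is a generic illustration, not the algorithm used in the linked video; the Track class, iou_thresh and max_misses values are invented for the example. It shows why a target that goes unmatched for even a few frames (a car passing behind some leaves) simply gets dropped.

```python
from dataclasses import dataclass

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

@dataclass
class Track:
    box: tuple       # last known bounding box
    misses: int = 0  # consecutive frames without a matching detection

def update_tracks(tracks, detections, iou_thresh=0.3, max_misses=3):
    """One frame of greedy IoU association. A track that stays unmatched
    for more than max_misses frames (e.g. the target is briefly occluded)
    is deleted - the 'tracking stops behind a few leaves' behaviour."""
    unmatched = list(detections)
    for t in tracks:
        best = max(unmatched, key=lambda d: iou(t.box, d), default=None)
        if best is not None and iou(t.box, best) >= iou_thresh:
            t.box, t.misses = best, 0
            unmatched.remove(best)
        else:
            t.misses += 1
    tracks = [t for t in tracks if t.misses <= max_misses]
    tracks += [Track(box=d) for d in unmatched]  # unmatched detections start new tracks
    return tracks

# Toy usage: one detection, then 5 frames of occlusion, then it reappears.
tracks = update_tracks([], [(10, 10, 50, 50)])
for _ in range(5):
    tracks = update_tracks(tracks, [])  # occluded: no detections this frame
tracks = update_tracks(tracks, [(12, 11, 52, 51)])
print(len(tracks))  # 1, but it is a new track - the original identity was lost
```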