BletchleyGeek last won the day on April 28 2018

BletchleyGeek had the most liked content!


About BletchleyGeek

  • Rank
    Senior Member

Profile Information

  • Location:
    Melbourne, Australia
  • Interests
    Computer Science, AI, History, Wargaming

Recent Profile Visitors

2,077 profile views
  1. Thanks @BFCElvis for the update, the Italy games have always had a temperamental TO&E, so I keep my fingers crossed that the tidying up doesn't turn into a death march.
  2. I was about to use words to that effect. He's a very cautious and meticulous player. We played a game of the Market Garden Devil's Hill scenario a few years back, and it took a while to stumble upon his troops.
  3. That paragraph could totally be in a book by Julian Barnes. Nicely written, Ben.
  4. Looking forward to seeing and hearing the Indian Army in its full digital glory... they fought probably the hardest and toughest of campaigns in Italy and Burma. @Warts 'n' all I am personally looking forward to a Scottish voice pack.
  5. Not on the commercial engines, but Graviteam's engine does.
  6. I was about to ask "where is it set" but I will wait for the DAR/preview (if that's happening this time around).
  7. For creating colourful, diverse animations, yeah, that is for sure. One of the issues in applying AlphaZero to professional wargaming is dealing with the problem of time, space and coordination of assets to, say, evaluate possible COAs implementing a Strike mission. Random exploration, which is what AlphaZero does, has exponentially diminishing probabilities of landing on a particular game state via a valid set of moves. In a game with two choices per move, the probability of a random exploration heuristic generating a particular sequence is very easy to calculate, and it vanishes to zero as the length of the sequence grows. It worked for AlphaZero because board games have nice terminal states where you get a very simple payoff, and you can easily extract features that correlate well with possible game outcomes. David Silver did a lot of work with handcrafted features before turning to CNNs. And still, having to play games to completion is probably the most time-consuming part of executing the Alpha algorithms. In other settings you need a heuristic, or base policy, to show the way. That is, good old "AI" techniques that practitioners call model-based search and optimization...
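The vanishing probability of random exploration described above can be sketched in a few lines (my illustration, not from the post): with b legal choices per move, a uniformly random policy reproduces one specific length-L sequence with probability (1/b)^L.

```python
# Illustrative sketch: probability that uniform random exploration
# emits one particular action sequence of length L when every
# position offers `branching` legal moves.

def p_exact_sequence(branching: int, length: int) -> float:
    # Each step matches the target move with probability 1/branching,
    # and the steps are independent, so the probabilities multiply.
    return branching ** -length

# Even in a game with just two choices per move, the probability
# halves with every extra move and vanishes quickly.
for length in (10, 50, 100):
    print(length, p_exact_sequence(2, length))
```

At length 100 the probability is already below 10^-30, which is why, without terminal-state payoffs to learn from, a base policy or heuristic is needed to bias the exploration.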
  8. I have been ignoring Rattenkrieg for a while, since it became apparent he was not interested in having a technical discussion. There is, in my mind, a very clear difference between promising and delivering, and between mocking and challenging. I sketched a very reasonable benchmark to test Rattenkrieg's assertions, and asked for a budget. Research does not happen for free, and what Rattenkrieg proposes is a research project, not a software development project. The experience of the Leela project is well documented, and so are the challenges of integrating modern machine learning. Maybe he thinks I am mocking him because he has no idea of the costs involved.

     There is active research on applying deep learning to professional wargaming. The problems being found are: a lack of computational resources to match those results within reasonable time frames; the asymmetry of military wargaming scenarios, which does not fit the well-behaved, neat game-theoretical structure of classic boardgames; issues with representation, as CNNs cannot exploit high-dimensional state representations to come up with generalised features; and more, like the problem of composability I discussed earlier. People have been trying quite hard at this for a few years now.

     Contesting wild claims by a company is not "over the top". We should all do it more often. But I have no "friends" in that company either. The USARL has been working for a long time on mapping out the challenges posed by the so-called Internet of Battle Things: https://arxiv.org/abs/1712.08980

     So let's say that, considering the claims made by Rattenkrieg, and looking at those challenges, I remain skeptical both that Palantir can solve that problem, and that they need to solve it in order to provide the US Army with next-gen information processing and communications systems.
Confronting the CV community with proof that well-known approaches to object recognition suffer from massive overfitting was probably not going to go down well, but the fact is that they have been shown to break down when inputs are trivially perturbed with noise. Hence why they are not suitable technology, by themselves, to provide automatic target acquisition, and I stand by my remarks. They can be easily spoofed and can be made to track pretty much anything - see the examples in the papers I linked in the first post. So much for "sweeping statements", really. There is even a new, very hyped-up field of research called "Adversarial Machine Learning" studying just this, with varying levels of success.

The locomotion suite by Google is a very useful research platform. Still, there are fundamental limitations on what kinds of tasks can be learnt by neural networks without relying on specialised architectures and careful selection of the parameters, search and optimization algorithms that guide and implement the training process. The work on Universal Planning Networks illustrates this. The locomotion results are relevant for the video games industry, and there is probably already work published at SIGGRAPH about it. Having an animated 3D character move between adjacent nodes of a navigation mesh is a remarkably easier problem than having a humanoid robot go up the stairs of a randomly chosen house in America. The videogame developer has literally godly powers to shape physics in such a way that everything runs in real time and looks good enough.

And with this I have pretty much said my piece. I hope some of the folks here found this readable and informative. If anybody wants to know more about any of the above, they can PM me or ask for further discussion.
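The fragility under trivial perturbations mentioned above can be demonstrated even on a toy linear "recognizer" (a minimal sketch of the adversarial-example idea; mine, not taken from the linked papers): a worst-case step of small per-feature magnitude flips a confident decision.

```python
import numpy as np

# Toy adversarial example on a linear classifier: the decision is
# sign(w @ x). A tiny perturbation aligned against the weights,
# eps * sign(w), moves the score by eps * ||w||_1 - enough to flip
# an otherwise confident positive classification.

rng = np.random.default_rng(0)
w = rng.normal(size=100)       # fixed "recognizer" weights
x = 0.1 * np.sign(w)           # input classified positive: w @ x = 0.1 * ||w||_1

eps = 0.2                      # small per-feature perturbation budget
x_adv = x - eps * np.sign(w)   # worst-case step against the score

print(w @ x > 0)               # prints True  (original decision)
print(w @ x_adv > 0)           # prints False (decision flipped)
```

The same sign-of-the-gradient construction scales to deep networks (the "fast gradient sign method"), where the perturbation can be small enough to be invisible to a human observer.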
  9. Always a good cartoon to keep in mind, @sburke. Thanks for that post, too.
  10. There is a big difference between winning a contract and delivering a product that works as expected. It also does not seem to be claiming to provide what Mr. Rattenkrieg says it does, or at least not as one of the main deliverables. Somebody just pulled the technical card on another forum member, and made arguments and claims which seemed to me both misinformed and hyperbolic. I pointed out to this person how those claims were at odds with the facts. Here I am wearing my scientist hat, not my BletchleyGeek-the-forum-member hat, as I think it is my civic duty to contest claims that I perceive to misinform the public. As usual on the Internet, this person just ran to the top of the hill, ready to die on it while refusing to engage, not actually answering any arguments or providing any references. That is a waste of time and a disappointment, just as the discussion was getting into technical details. I respected him enough to present some substantial arguments to discuss, and was far more intellectually generous than he was. Thanks very much for calling me a 12-year-old and a nerd... how is that supposed to be helpful? I do not suffer fools gladly when talking shop, and if you couldn't follow the discussion I am sorry you felt left out and excluded, but there are limits to the amount of time one can put into being didactic.
  11. Cheers @IanL, happy to hear someone found the discussion interesting. The technology is there, and in a very usable state - much more than it ever was in the late 1980s and 1990s, when the first wave of practical neural network algorithms and applications came up. Here's another paper you may appreciate reading (sorry if I am making too many assumptions about your background): https://papers.nips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf
  12. For the benefit of other readers interested in what it takes to reproduce the success of the Alpha algorithms: it is useful to consider that Deepmind reported beating Stockfish, a state-of-the-art automated chess player that had been under development for ten years, in just four hours (link to paper). That's impressive, but it is also useful to remember the scale of the hardware on which AlphaZero was trained during those 4 hours, as described in the paper. A distributed, open-source effort to replicate those results against Stockfish took over a year, painstakingly following Deepmind's approach: https://en.chessbase.com/post/leela-chess-zero-alphazero-for-the-pc - and they released it as a plugin for Fritz. Having a beefy GPU for computing the opponent's moves is highly recommended.
  13. This sentence proves that you didn't understand anything of what I wrote initially, that you don't know much, if anything, about how to apply deep learning to a practical problem from scratch, and that you haven't read any of the papers. AlphaZero was initially demonstrated on the game of Go, and Deepmind then renamed the approach AlphaX, where X is the name of the game to which they deployed the same algorithm, changing only the model of the game rules and states. The "Zero" comes from not using games with humans for bootstrapping, which has zero to do with all the other questions I raised - those are all design choices made by humans. Don't bother coming back. Save your time for more productive endeavours.