
Rattenkrieg

Members
  • Posts: 33
  • Joined
  • Last visited


Rattenkrieg's Achievements

Member (2/3)

Reputation: 0

  1. Is it really 2 guys developing all of this? Vehicles, animations, UI, textures, sounds, MP infra etc?
  2. Thank you! @37mm Indeed I was able to get the translucent foliage issue to disappear by using the Advanced War Movie shaders, which seem to disable MXAO. One thing that is a bit off-putting is the degree to which metal reflects light to the point where the entire front of an AFV can turn white at certain angles.
  3. Thanks @37mm that makes sense. Could you post your shader preset that you use for the WW2 games?
  4. Any new Reshade presets or tips? I noticed that turning MXAO on makes low walls basically burn through all foliage, creating a series of criss-crossed lines on the screen. Does anyone else see that?
  5. Hello folks - wondering if anyone knows how to increase the draw distance outside of the max settings in the game? Sometimes in other games I have been able to set up config files to override game menu settings, with things such as forcing memory sizes, screen resolutions and draw distances. Thanks!
  6. I couldn't have come up with a better coda. Here are 2 fun little nuggets for the curious. 1) Build your own Terminator Target Acquisition HUD in Microsoft Hololens. 2) Where did that "code" on the original Terminator HUD come from? std::terminate
  7. Ian, but I am discussing AI and DL with you. The Nvidia video and paper are incredible advances that should get game developers very excited. Imagine being able to use a GAN-based SDK from Nvidia to create a photo-realistic WW2 game based on DL from 10,000+ hours (x25fps) of combat footage from WW2 that has been restored by AI to 4K? Imagine being able to train your "pixeltruppen" using Google's (DeepMind) autonomous locomotion framework for rich environments (when it is released)? Just because two people who work in/with AI don't like how the other behaves on a forum doesn't mean there isn't already ample material here for enrichment.
  8. I struggle to see the connection between the suggestion that they use data and the inference that there is a recycling-bin approach to it. I suggest they start gathering gameplay data and analysing it, because they generate an enormous amount of it (every bullet, footstep and click) and have the competitive advantage of H2H play and of registering what players look at when they replay WEGO over and over. That's valuable.

BFC has been around for 24 years, is it? I am sure they will want to be around for another 26 at the very least. They could reach out to universities and recruit interns who love the military, simulation and AI, or a combination of those, or to the various military schools throughout the US (assuming they are a US Inc.). They could apply for an SBIR grant, like this startup did for computational video editing, and was awarded $224,734: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1842850&HistoricalAwards=false

Here is an example of a resource that my interns are using right now to restore old videos: https://xinntao.github.io/projects/EDVR

I chose not to get bogged down in an ML debate about the exact specifics of how a wargame could begin (note: begin, not deploy) exploring the possibilities of DL and AI in general. Firstly, BletchleyGeek mocked me by suggesting I work pro bono for BFC. Secondly, he made a spurious claim about a company where friends of mine work. Neither of those actions merits any response from me. He is comfortable in his knowledge of ML and his cognitive biases, and he went on the attack from the first paragraph of his initial response, at which point I decided I would not respond. I'm sorry if that disappoints you.

But why wait? Research is the companion of development, and DL is evolving so fast that it's better to start sooner rather than later.

@IanL yet another sweeping claim. Again, it betrays an embryonic understanding of DeepLearning that drastically limits my willingness to engage.

Perhaps he will say he meant "deploy them right now, today." But isn't that what I am being accused of insinuating with my encouragement of BFC to look into AI? That they are insane for not having already done it?

On the subject of DL systems for target identification and acquisition: https://digital-commons.usnwc.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=5125&context=nwc-review Although the entire paper is very interesting, Section 3 discusses autonomous technology in targeting in more detail.
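The data-gathering idea above could start very small: log structured gameplay events and count what players attend to during WEGO replays. A minimal sketch in Python (the `TelemetryLog` class, event names and fields are hypothetical illustrations for this post, not anything from an actual game engine):

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class GameplayEvent:
    """One telemetry record: an event kind plus an arbitrary payload."""
    kind: str        # e.g. "shot_fired", "footstep", "camera_focus"
    payload: dict
    t: float = field(default_factory=time.monotonic)

class TelemetryLog:
    """Append-only in-memory event log, serialisable to JSON lines."""
    def __init__(self):
        self.events = []

    def record(self, kind, **payload):
        self.events.append(GameplayEvent(kind, payload))

    def to_jsonl(self):
        # One JSON object per line: a convenient format for later analysis.
        return "\n".join(json.dumps(asdict(e)) for e in self.events)

    def count(self, kind):
        return sum(1 for e in self.events if e.kind == kind)

# Example: log a few hypothetical WEGO replay interactions.
log = TelemetryLog()
log.record("shot_fired", unit="squad_3", target="bunker_1")
log.record("camera_focus", unit="squad_3", replay_pass=2)
log.record("camera_focus", unit="tank_1", replay_pass=2)
print(log.count("camera_focus"))  # → 2
```

Even a stream this simple, accumulated over many H2H games, is the kind of raw material that NN experiments would need.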
  9. p.s. just in case I am not posting enough Arxiv links to back up my computer generated street cred: https://arxiv.org/pdf/1808.06601.pdf
  10. The TL;DR version. My original statement above hypothesizes a future in which "vast amounts" of gameplay data and computer-vision data derived from actual combat footage are processed by NNs to construct autonomous tactical models. It's a forward-looking hypothesis about a direction in which wargaming AI may evolve, and something game developers should be encouraged to explore, because I can guarantee you that a day will come in the not-too-distant future when CGI does not mean models built by people. Having seen a Palantir demo of battlefield AI in person (which I am sure naysayers will call a closed-loop hype demo based on bull****), I consider the convergence of these two fields to be simultaneously inevitable and exciting.

This is the kind of work I am involved in (we are a partner, but I do not work for Nvidia). What you see in the video below is a world constructed by NNs (based on a generative network) using Unreal 4. Now just imagine a world where, instead of people driving the car through the virtual world, it's an AV, and the other cars on which the models were built are also AVs. What starts as pure research will have immensely impactful practical applications within the next 5-10 years, in peacetime cities and on battlefields.

My first encounter with computer vision in business was with a hedge fund that rented several thousand apartments around the world. Each apartment overlooked a street with specific retail brands on it. In the window of each apartment was a multitude of video cameras that recorded every human who entered and exited each store. Many thousands of cameras provided massive amounts of data, and a highly accurate predictive model of each brand's retail performance was derived from the CV data, which included the size, colour and number of shopping bags that each identifiable shopper (anonymously tracked) emerged with, contrasted with their profile as they entered.

It was also possible to track repeat shoppers even when they changed their clothes, headgear, hairstyles, etc. The hedge fund using this technology was able to consistently beat market predictions for stock performance. That was nearly 8 years ago. I have the privilege of working in a pretty cutting-edge field, and I encourage everyone out there to familiarize yourselves with what's coming.
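A toy version of that pipeline: once computer vision has reduced the footage to per-store counts, the prediction step itself can be as simple as a one-variable least-squares fit. All numbers below are invented purely for illustration; a real model would use far richer features than a single shopper count:

```python
# Invented per-store observations:
# (shoppers leaving with bags, reported quarterly sales in $M)
observations = [(120, 4.1), (95, 3.2), (150, 5.0), (80, 2.9), (130, 4.4)]

def fit_line(points):
    """Ordinary least-squares fit y = a*x + b over (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points)
    var = sum((x - mx) ** 2 for x, _ in points)
    a = cov / var
    return a, my - a * mx

a, b = fit_line(observations)
# Forecast for a store where CV counted 110 bag-carrying shoppers.
predicted_sales = a * 110 + b
print(round(predicted_sales, 2))  # → 3.77
```

The hard part is not the regression; it is the CV front end that turns raw video into reliable counts and attributes.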
  11. You are an exercise in self-contradiction. I stated that the AI being developed by Palantir as part of their winning DCGS-A bid is "the closest thing I have seen so far" to AI and CV being able to generate autonomous battlefield C&C. You jumped on your high horse and started telling me how wood and trees do not equal a forest. You actually state that you have no idea what they are doing, but that it is pure bull****. Have you any idea how silly that is? Yet, by your logic, I'm misinformed and hyperbolic? This means you have no idea about, yet simultaneously understand and are up to date with, all of the below. Palantir is eagerly awaiting your CV.

SPECIFICATIONS

  • Tactical Intelligence Ground Station (TGS): Tactical geo-intelligence PED and targeting node. TGS retains Common Ground Station capability and functionality; upgrades the hardware; adds more moving target identification (MTI), full motion video and imagery exploitation capability; and provides totally integrated stand-alone imagery, MTI and video sensor processing
  • Geospatial Intelligence Work Station: Provides geospatial and imagery analysts within tactical and operational Army units the ability to process, view, exploit, transmit and store geospatial and imagery information via Army area communications from brigade to echelons above corps
  • Operational Intelligence Ground Station: Consolidates the capabilities of the AN/TYQ-224A, GUARDRAIL Ground Baseline and the Tactical Exploitation System Forward
  • Intelligence Processing Center: V1 provides a suite of core PED applications for intelligence analysis and storage. V2 is the brigade combat team and division commander's primary ISR networking, analysis and production system for tasking of sensors and PED support

https://asc.army.mil/web/portfolio-item/iews-dcgs-a/

And the work of all of the following involved: Lockheed Martin (Denver, CO); General Dynamics (Scottsdale, AZ); ViaTech Systems, Inc. (Eatontown, NJ); Palantir (Palo Alto, CA); MITRE (Eatontown, NJ); Booz Allen Hamilton (Eatontown, NJ); Raytheon (Garland, TX; Arlington, VA); BAE Systems (Arlington, VA); NetApp (Sunnyvale, CA); VMware (Palo Alto, CA); Esri (Redlands, CA); Tucson Embedded Systems (Tucson, AZ); L3 Communication Systems (Tempe, AZ); Dell (Austin, TX); Potomac Fusion (Austin, TX); Red Hat (Raleigh, NC); IBM (Armonk, NY); HP (Palo Alto, CA); Leidos (Reston, VA); ManTech (Fairfax, VA); Oracle (Redwood Shores, CA); Microsoft (Redmond, WA).

I have seen a demonstration of the following: moving target identification (MTI), full motion video and imagery exploitation capability, and totally integrated stand-alone imagery, MTI and video sensor processing. As I stated in my OP, I work in the field of AI and computer vision. I fight Palantir on a weekly basis to stop them from headhunting my people. You work in the field of ML and are applying, in my opinion, myopic and narrow thinking in a knee-jerk claim that no self-respecting AI specialist would make unless they were frustrated at being bypassed. The completely unsubstantiated discrediting of a company perfectly fits your definition of a claim that misinforms the public. If you are, indeed, a scientist, then it's even more shameful.

So that's what you call adding a disclaimer that explains why I object to the overuse of the term "AI"? Again, your logic is insane. You complain that the results of DL experiments such as AlphaZero are overhyped, but you are fully OK with the overuse of the term AI to mean anything that is a rules-based decision-making framework? Reading your posts is truly a stultifying experience.
  12. Well @IanL I use it as a litmus test. If you claim that, arguably, the world's leading AI defense contractor is "full of pure bull****", don't you think that a detailed set of reasons is required to back up that statement? He lists a set of reasons why ML is, in general, difficult to apply to the dynamic complexities of combat; however, when you make the outlandish claim that the established leader in battlefield AI (just ask Raytheon, who lost to them) has completely fooled the Pentagon, that is akin to stating that you know exactly what the Pentagon put out an RFP for, what Palantir demonstrated, and why it was accepted. And that the entire thing is a ruse to defraud the US taxpayer. That is an enormously arrogant and irresponsible claim, and perhaps that is lost on you and most other readers.

Was I, in fact, "overstepping" when suggesting that the devs start to train NNs to test their capabilities by gathering data? You may say that if you don't understand the potential of DeepLearning. The problem seems to be that BletchleyGeek is an ML researcher (I doubt he is a DL researcher) with a cognitive bias - and the difference is not to be underestimated - and expects me to rebut his statements, when in reality the burden of proof is on him to prove that Palantir (and by extension the US DoD and CENTCOM) are full of "pure bull****" in the context of AI for warfighting applications. The irony is that he then goes on to say just how amazing DeepLearning actually is, yet nobody seems to notice the evolution in his position.
  13. It's all well and good to post links to papers (just FYI, the majority of the most interesting papers published on AI in 2018-2019 are in Mandarin). I'm still waiting for @BletchleyGeek to back up his claim that Palantir ($800 million contract to build a DeepLearning-based warfighting system for the US Army) is full of "pure bull****". That anyone can make a ludicrous claim like that and get away with it speaks volumes.