Insane AI Learned Minecraft – One Step Closer to Simulated Reality…



Minecraft Viki (video wiki) ➜ https://minecraft.viki.gg

The latest AI news. Learn about LLMs and Gen AI, and get ready for the rollout of AGI. Wes Roth covers the latest happenings in the world of OpenAI, Google, Anthropic, NVIDIA and open-source AI.

My Links 🔗
➡️ Subscribe: https://www.youtube.com/@WesRoth?sub_confirmation=1
➡️ Twitter: https://x.com/WesRothMoney
➡️ AI Newsletter: https://natural20.beehiiv.com/subscribe

#ai #openai #llm


37 thoughts on “Insane AI Learned Minecraft – One Step Closer to Simulated Reality…”

  1. It’s amazing how fast this runs! I think the speed comes from using consistent seeds for each frame, so it’s not recreating everything from scratch, which keeps things looking smooth and consistent. Plus, it seems like they might be using interpolation and temporal-consistency tricks, which help blend frames together naturally. And running on a TPU really boosts performance, especially with these optimized diffusion techniques. (A rough sketch of the fixed-seed idea appears after the comments.)

  2. If we live in a simulation, I bet quantum physics not being deterministic is just the AI that simulates us not loading unnecessary data, kind of like not rendering every chunk of a Minecraft world at all times.

  3. It’s both very impressive and also underwhelming. As nifty as these videos are, the reality is that there is no consistency in the world: a tree is there one moment, then gone the next time you look.

  4. But you know, like we said about the Chinese, learning and copying is one thing; coming up with novel ideas (games, in this instance) is another.
    So yes, while it might be fantastic, we are far from “new” games running on LLMs (also, I think the ESG departments will have a heart attack looking at the CO2 footprint of all the gamers in the world having to run this 😅🫣).

  5. No object permanence. The model should be learning to generate the actual state and map of the game from the visuals, not just generating some diffusion-based image. Kinda silly.

  6. Was this video AI-edited?
    Really not a fan of this new video style, complete with the one-word captions popping up as you speak. It just feels like filler rather than placing emphasis.
    And the weird background is always there, with a small window showing the content.
    Feels off.

  7. So, where are we headed in the end, Mr. Roth? Is this to teach ourselves that we are merely characters in a simulation of life, with no chance of peering beyond our universe? So many questions… in time, I suppose.

  8. Yeah… I do think that we will use this tech for… architecting and whatever you said… or mainly to generate the Brazzers videos we all dream of, playing out the way we want with the faces of whoever we want… I don’t think you’ve checked lately the Stable Diffusion models people have on Civitai… Anyway, yeah, sure, if it makes you feel good, we will create virtual sets or architecture or whatever you said, yeah yeah, definitely that’s what it will be used for.

  9. I tried it out. The biggest obstacle to the current version being used as an engine is object impermanence. I think it needs metadata available about what’s not currently visible. That isn’t easy when training from videos, but I’d think it would be relatively feasible with synthetic training data: for example, an “off-screen” permanent inventory and a 360° fisheye camera. For larger-scale coherence I could imagine a form of RAG: if you could estimate the position and direction of view, you could throw in frames from earlier in the gameplay that look in that direction as extra context for what should be visible. (A rough sketch of this retrieval idea appears after the comments.)

  10. I see this as a dream machine for AGI to work out how the physical world works, so training can be ongoing instead of upfront-only. As an AGI system interacts with the real world every day, it could take all those interactions and dream about all the other possible outcomes that didn’t happen, so as to be trained on how to react in the future. The AI would do this during downtime, or offload the dream process and integrate the dreams at a later time.

  11. The next 5 years are going to be absolute insanity, let alone the next 10. I for one can’t wait for all the advancements everywhere. Ethicality and morality don’t concern me, as those are just human constructs that we will forever fight over, arguing who’s “right” and “wrong” when in fact no one is either, objectively. Life will keep going no matter what happens, and I’m excited to see all that comes from it.

  12. If this was trained on game world state as well as visuals, it would naturally be able to interact with actual long-term game state. Keep working from there, and you could move toward an AI that can work in any style and with any state, and then you have a real-time game generator. (A rough sketch of state-paired training data appears after the comments.)

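A few of the ideas above can be made more concrete. First, comment 1’s guess about consistent seeds: below is a minimal sketch of a frame-by-frame diffusion sampling loop with a fixed per-frame seed. Everything here is hypothetical: the `denoise` function, the 64×64 resolution, and the conditioning signature are illustrative, not taken from any published model.

```python
import numpy as np

def sample_frame(denoise, prev_frame, action, seed, steps=8):
    """Sample one gameplay frame with a fixed per-frame noise seed.

    Reusing a deterministic seed per frame index makes the noise
    trajectory reproducible, so small changes in the conditioning
    (prev_frame, action) yield small changes in the output: one
    plausible source of the smoothness the commenter noticed.
    """
    rng = np.random.default_rng(seed)           # deterministic noise source
    x = rng.standard_normal((64, 64, 3))        # initial noise for this frame
    for t in reversed(range(steps)):            # standard reverse diffusion loop
        x = denoise(x, t, prev_frame, action)   # hypothetical denoiser call
    return x

# Deriving each frame's seed from its index keeps replays reproducible:
# frame = sample_frame(denoise, prev_frame, action, seed=1000 + frame_index)
```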
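Second, comment 9’s retrieval idea: assuming the camera’s position and yaw can be estimated per frame (both are assumed inputs here, as are the class name and thresholds), a pose-indexed frame memory might look like this sketch.

```python
import math

class FrameMemory:
    """Hypothetical pose-indexed frame store for comment 9's idea:
    retrieve earlier frames that looked in roughly the same direction
    from roughly the same position, as extra context for the generator."""

    def __init__(self):
        self.entries = []  # (position (x, y, z), yaw in degrees, frame)

    def add(self, position, yaw, frame):
        self.entries.append((position, yaw, frame))

    def retrieve(self, position, yaw, k=4, max_dist=16.0, max_angle=45.0):
        def angle_diff(a, b):
            return abs((a - b + 180) % 360 - 180)  # wrap-aware yaw distance

        def score(entry):
            pos, pyaw, _ = entry
            return (math.dist(position, pos) / max_dist
                    + angle_diff(yaw, pyaw) / max_angle)

        candidates = [e for e in self.entries
                      if math.dist(position, e[0]) <= max_dist
                      and angle_diff(yaw, e[1]) <= max_angle]
        candidates.sort(key=score)
        return [frame for _, _, frame in candidates[:k]]

# The retrieved frames would be appended to the model's context so it has
# evidence of what should be visible when the player turns back toward
# previously seen terrain.
```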
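Finally, comment 12’s suggestion to train on game world state alongside visuals: one illustrative layout for paired (frame, state) training examples, with every field name invented for the sketch.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrainingExample:
    """Hypothetical visuals-plus-state training pair: supervise the model
    on underlying game state, not just pixels, so it can learn to carry
    long-term world state."""
    frame: bytes                 # encoded screenshot for this step
    action: str                  # player input for this step
    inventory: List[str] = field(default_factory=list)  # off-screen state
    block_updates: List[Tuple[int, int, int, str]] = field(
        default_factory=list)    # (x, y, z, block_id) changes this step

# A model trained to predict both the next frame and the next state delta
# could be queried for the state alone, moving the generator closer to an
# actual game engine.
```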
