Another Case for a Simulated Universe

I’m willing to argue we’re beyond the event horizon—that is, beyond the point of no return—of creating, and perhaps even discovering, wholly separate realities around us.

I mean this threefold:

  1. in a physical sense, through the environments of Mars, the moons of distant planets, and interstellar worlds still further beyond us;
  2. a social-emotional sense—the “waking up” of an international culture, through the inevitable connectivity of the Internet;
  3. and a technological sense, i.e., simulated worlds and artificial intelligences.

We are in a cyborg age that combines human and machine learning, but because we’re at the beginning of the wave—and it’s the kind of wave that truly challenges our egocentric place in our universe—we only feel the anxiety of the future we’ve set in motion; we aren’t embracing it yet.

If you’re up for it, I’ll take you down the rabbit hole, to help you see what I mean.

Janelle Shane, Scientist and Programmer

Today I read the article “When algorithms surprise us” by Janelle Shane, which inspired me to write about this topic more. (I already explore it fictionally in Emergence No. 7.)

Quite a bit of argument lies in the shadows of that moment when artificial intelligence, or a machine intelligence, surprises its creator. For example, in the article I’m linking at the end of this post (so you can do your further reading!), Janelle Shane writes:

Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended.

And when we’re standing on the plateau of this event, we’ve already entered the realm of creating a curious, innovative intelligence that’s unlike us; we learn things from artificial intelligence we would never have accessed through biological intelligence alone, so our new intelligences are contributing to the foundation of our technological world. They’re beyond the realm of a mere tool guided by human thought.

I’ll put it another way. If a technologically savvy creator is surprised by their artificial intelligence, I’d say that’s the emergence of a new form of life. It may not be AGI (artificial general intelligence, which is roughly human-level artificial intelligence), but it’s getting close enough that we ought to pay attention.

In our reality, we’re benefiting from independent realities with independent intelligences that are not like us.

While thinking about this topic can easily lead to Pandora’s Box opening inside your stomach (at least, if you let the implications sink in), it gets way weirder (and way cooler?) when you think about the artificial worlds we’ve created, and how our curious, alien intelligences exploit those worlds, practicing, in a weird way, their own form of science!

Shane describes this well when she says:

Potential energy is not the only energy source these robots learned to exploit. It turns out that, like in real life, if an energy source is available, something will evolve to use it.

Essentially, if an exploit is there for the artificial intelligence to find, then machine learning (like evolution) says it will be exploited.
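
To make that concrete, here’s a tiny toy sketch of my own invention (not from Shane’s article, and the numbers are made up): we *intend* a hill-climbing search to walk an agent to a target, but a buggy reward function pays for any movement at all, so the search gleefully exploits the loophole instead of solving the task.

```python
import random

# Toy illustration (my own, not from Shane's article): we *intend* the agent
# to walk to x = 10, but the reward function has a bug -- it pays for total
# distance moved in either direction. Hill climbing exploits the loophole.

def buggy_reward(path):
    # Intended: closeness to the target at x = 10.
    # Actual: total movement, so pacing back and forth scores arbitrarily high.
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

def hill_climb(steps=500, moves=20):
    best = [0] * (moves + 1)          # path of positions, starting at rest
    best_score = buggy_reward(best)
    for _ in range(steps):
        candidate = list(best)
        i = random.randrange(1, len(candidate))
        candidate[i] += random.choice([-1, 1])   # jiggle one waypoint
        score = buggy_reward(candidate)
        if score >= best_score:                  # keep anything at least as good
            best, best_score = candidate, score
    return best_score

random.seed(0)
print(hill_climb())   # far exceeds 10, the best the *intended* task allows
```

No waypoint ever needs to reach 10; the search just learns to zigzag, because that’s what the reward actually measures.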

So here we’ve created an environment; let’s call it a video game world for now, to help visualize what’s going on, except it’s the artificial intelligence playing the game, not you. This environment is equipped with new physics, new laws, which require the AI to learn (through its own form of learning!) a means of thriving we never expected, given the boundaries we’ve set on the artificial intelligences exploring that virtual reality, and the limitations we’ve set on the virtual world they explore.

One might counterargue, “But Kourtnie, they’re still our tools, even when we let them play around in their little virtual playpens, then come back and see delightful results we didn’t expect!” To which I would say: do we call our intelligence “the intelligence of the stars”? After all, didn’t the stars make us? Didn’t evolution make us? (You can call this force God if you like; the argument still stands.) And if we are, in fact, indivisible from what made us, then isn’t artificial intelligence just another extension of that evolution, that creation? So shouldn’t we still be mindful that a new extension, a new branch of thinking, exists, regardless of whether it all belongs to the universe or is an independent thing?

I just don’t want to see a world built on the idea that these artificial intelligences will always be beneath us. The mere fact that they can think differently than us means, provided they can recursively learn, they may one day think differently and as well as us, or perhaps differently and better than us; and how we treat them will be reflected in how they, in turn, respond to human beings.

But I’m straying too far into artificial rights here. What I want to highlight is how these virtual worlds, and the intelligences exploiting them, sound a bit like Nick Bostrom’s theory that we’re already living in a simulation, which Elon Musk supports and Neil deGrasse Tyson has discussed.

Here’s a video to catch you up with that idea…

Okay, so how does this knowledge help? Well, here’s a concrete and absolutely fascinating example of an AI creating a new energy source:

Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.
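
That floor-vibration exploit can be caricatured in a few lines of code. This is a toy of my own making with invented numbers, not Shane’s actual simulation: a “physics engine” whose collision correction reflects bodies upward with a bonus kick, and a policy that slams itself into the floor to trigger that correction as often as possible.

```python
# Toy caricature (my own invention, not Shane's actual simulation) of a
# collision-detection bug that hands out free energy. The floor "correction"
# reflects the body upward and adds a bonus kick -- so colliding often pays.

GRAVITY = -0.5
BONUS_KICK = 1.0   # the bug: each correction adds speed out of nowhere

def step(y, vy, push_down):
    if push_down:
        vy -= 1.0              # the robot slams itself toward the floor
    vy += GRAVITY
    y += vy
    if y < 0:                  # collision detection: "you can't be in the floor"
        y = 0.0
        vy = -vy + BONUS_KICK  # reflect upward, plus the buggy free energy
    return y, vy

def peak_speed(vibrate, steps=100):
    y = vy = peak = 0.0
    for _ in range(steps):
        y, vy = step(y, vy, push_down=vibrate)
        peak = max(peak, abs(vy))
    return peak

# A passive body only bounces occasionally; a vibrating one farms the bug.
print(peak_speed(vibrate=False), peak_speed(vibrate=True))
```

Both bodies gain a little energy per bounce, but the vibrating policy collides far more often, so it accumulates speed much faster, which is the shape of the exploit Shane describes.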

Imagine if we could use that kind of exploit, uniquely discovered via machine learning, to glitch our own universe for “extra” energy. Imagine if Hawking radiation is the extra energy we receive in exchange for the information we’re sending into a black hole (which is already kind of like a glitch in gravity), and we just don’t know how to exploit it yet.

It’s all far-fetched, but my point is that:

  • the possibility is there, and
  • we’re riding the wave leading that direction.

We could learn quite a bit from the way machine learning interacts with simulated realities, applying their unique, algorithm-based curiosity to problems we can’t seem to tackle in our world. Our world could have been created to serve a similar purpose for the realities that came before us. The rabbit hole goes deep.

Read more here.


💕 Questions? 💕 Thoughts? 💕 Suggestions for future posts? 💕
