Mortal-Region

Mortal-Region t1_jebev1o wrote

Yeah, bonehead move -- seems they just set up a simple web form.

But what's suspicious to me is that the letter specifically calls for a pause on "AI systems more powerful than GPT-4." Not self-driving AI or stable diffusion or anything else. GPT-4 is the culprit that needs to be paused for six months.

Then, of course, there's the problem that there's no way to pause China, Russia, or any other authoritarian regime. I've never heard a real solution to that. It's more like the AI-head-start-for-authoritarian-regimes letter.

27

Mortal-Region t1_jeaia9q wrote

>We aren’t making clear progress in game character AI like other stuff, and we need a proper leap.

Problem is, it's such a huge leap going from decision trees to autonomous agents who can form their own objectives and plans with respect to other characters and the environment. It's pretty much the same problem that brains evolved for. LLMs aren't agents, so I think they'll end up as automated dialogue generators, with the overall storyline still being "on rails".

6

Mortal-Region t1_je8jw22 wrote

Typically, humans provide the training data, then a program performs the actual training by looping through the data.

EDIT: One exception would be a game-playing AI that learns via self-play. Rather than humans supplying it training data in the form of games played by experts, the training data consists of the games the AI has played against itself.

9

Mortal-Region t1_je8h882 wrote

A neural network contains a huge number of weights -- numbers representing the strengths of the connections between the artificial neurons. Training is the process of setting the weights in an automated way. Typically, a network starts out with random weights. Then training data is presented to the network, and the weights are adjusted incrementally until the network learns to do what you want. (That's the learning part of machine learning.)

For example, to train a neural network to recognize cats, you present it with a series of pictures, one after the other, some with cats and some without. For each picture, you ask the network to decide whether the picture contains a cat. Initially, the network guesses randomly because the weights were initialized randomly. But every time the network gets it wrong, you adjust the weights slightly in the direction that would have given the right answer. (Same thing when it gets the answer right; you reinforce the weights that led to the correct answer.)

For larger neural networks, training requires an enormous amount of processing power, and the workload is distributed across multiple computers. But once the network is trained, it requires much less power to just use it (e.g., to recognize cats).
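The loop described above can be caricatured in a few lines of Python -- a toy one-neuron "network" on a made-up cat-detection task (the features and all the numbers are invented for illustration, not from any real vision model):

```python
import math
import random

# Toy "is there a cat?" classifier: one artificial neuron with two
# weights. The features (say, ear-pointiness and whisker-density) and
# all the numbers here are made up for illustration.
random.seed(0)
data = [([1.0, 0.9], 1),   # cat
        ([0.8, 1.0], 1),   # cat
        ([0.1, 0.2], 0),   # no cat
        ([0.0, 0.3], 0)]   # no cat

weights = [random.uniform(-1, 1) for _ in range(2)]  # start random
bias = random.uniform(-1, 1)
lr = 0.5  # how hard to nudge the weights after each guess

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-s))  # squash to a 0..1 "cat-ness" score

for epoch in range(200):            # show the pictures over and over
    for x, label in data:
        err = label - predict(x)    # positive if the guess was too low
        for i, xi in enumerate(x):
            weights[i] += lr * err * xi   # nudge toward the right answer
        bias += lr * err

correct = sum((predict(x) > 0.5) == bool(label) for x, label in data)
print(correct, "of", len(data), "correct")  # 4 of 4 correct
```

Starting from random weights it guesses at chance, and after enough passes over the data the nudges accumulate into a perfect score on this tiny set.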

23

Mortal-Region t1_jd47m0x wrote

Don't forget 2001: A Space Odyssey, which features an AI with an ill-considered objective function. Beware, though -- it's a super slow burn; the idea is that you're supposed to proactively insert yourself into the locales, maybe with a bit of assistance from a mild substance.

Also, The Thirteenth Floor, a simulated-reality story. It's got a lousy tomatometer, but I think it's better than The Matrix, which came out a couple of months earlier. I think a lot of reviewers didn't follow the plot. You've really got to pay attention (or watch it more than once).

Also in the category of simulation stories that got blown away by The Matrix is eXistenZ. It's directed by Cronenberg, so as you'd expect, the tech has a creepy, biological quality.

12

Mortal-Region t1_j9w0tqc wrote

>The universe is only 14 billion years old, and it will have conditions for life to arise for another 10-100 trillion years.

Which raises an interesting question -- why so early? If the timeline is 7 meters long, why do we happen to find ourselves in the first millimeter? It gets even more acute if you allow for the possibility of digital civilizations. They'd survive the black hole era, so now the timeline is many times the diameter of the Milky Way. Yet here we are in the first millimeter. And that millimeter represents the entire time since the Big Bang. Considering that computers were invented less than a century ago, it all seems very fishy.
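A quick sanity check on the geometry, taking the upper end of the quoted range (100 trillion years) as the toy total:

```python
# Checking the "first millimeter of a 7-meter timeline" picture, using
# the upper end of the 10-100 trillion year range quoted above.
now = 14e9                 # years since the Big Bang
total = 100e12             # years the universe stays habitable (upper bound)
timeline_mm = 7 * 1000     # a 7-meter timeline, in millimeters
our_position = now / total * timeline_mm
print(round(our_position, 2), "mm")   # 0.98 mm -- the first millimeter
```

So the whole 14-billion-year history of the universe really does fit inside the first millimeter of that 7-meter timeline.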

1

Mortal-Region t1_j9vjeyi wrote

  1. There has yet to be a singularity event in the Milky Way, which would suggest that humans are the first technologically advanced civilization in the galaxy since its formation -- which is statistically very unlikely.

If the Great Filter is in our past, then it's not unlikely for us to be the first technological civilization. In fact, we should expect to be the first because it's unlikely for multiple species to make it through the filter simultaneously. I guess it's still weird that we made it through the filter in the first place. But it's the same kind of weirdness that arises from the fact that we happen to be intelligent humans rather than slugs. It's only weird if you're intelligent enough to think about such things.

What is weird, I think, is that we happen to exist right at the moment of the singularity. A galactic civilization would have a lifespan of billions or even trillions of years, yet here we are, witnessing the birth of AI, space travel, etc. -- the very tech that makes galactic civilization possible. The first electronic computer was invented less than 80 years ago!

4

Mortal-Region t1_j9ggdp9 wrote

Reply to comment by Zalameda in The dreamers of dreams by [deleted]

In any system that continuously accumulates new memories, there's the problem of how to balance newer memories (e.g., "There's a bear in the cave") with older ones (e.g., "Bears are faster than people"). If nothing is done about this problem, then newer memories will simply crowd out older ones, leading to catastrophic forgetting. You'll know that there's a bear in the cave, but not that bears are faster than people.
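The crowding-out problem can be caricatured with a fixed-capacity buffer (a deliberately silly model -- the capacity and the "memories" are made up for illustration):

```python
from collections import deque

# Caricature of the crowding-out problem: a fixed-capacity memory that
# keeps only the most recent items. The capacity and the "memories"
# are made up for illustration.
memory = deque(maxlen=3)
for fact in ["bears are faster than people",
             "the cave is dry",
             "berries grow by the river",
             "there's a bear in the cave"]:
    memory.append(fact)    # each new memory silently evicts the oldest

print(list(memory))        # the oldest fact -- the crucial one -- is gone
```

With nothing done to integrate old knowledge, "bears are faster than people" is exactly the fact that gets pushed out.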

Nature's solution is to periodically take the system offline in order to perform a memory integration procedure. At night, recent memories are replayed; they are uploaded from the hippocampus (literally up) and "stirred" into the cortex, where longer-term memories and general knowledge-of-the-world are stored.

This replay procedure is performed during both REM sleep (when dreams occur) and non-REM sleep. It's thought that dreams occur during REM sleep because this is when the replay procedure is performed on the kinds of memories that pertain to immediate conscious awareness. (Non-REM sleep seems to deal more with motor memories.)

2

Mortal-Region t1_j9bjuue wrote

Not only is work progressing, but it could best be described as a mad rush. IBM, Intel, Microsoft, Google, Amazon, and others are all working on the problem, approaching it from slightly different angles. Any one of them might report a breakthrough at any moment.

125

Mortal-Region t1_j8lcmqf wrote

Reply to comment by phloydde in Speaking with the Dead by phloydde

Yeah, I think that's right. An agent should maintain a model of its environment in memory, and continually update the model according to its experiences. Then it should act to fill in incomplete portions of the model (explore unfamiliar terrain, research poorly understood topics, etc).

1

Mortal-Region t1_j8j9mii wrote

Neural networks in general are basically gigantic, static formulas: get the input numbers, multiply & add a billion times, report the output numbers. What you're imagining is more like reinforcement learning, which is the kind of learning employed by intelligent agents. An intelligent agent acts within an environment, and actions that lead to good outcomes are reinforced. An agent balances exploration with exploitation; exploration means trying out new actions to see how well they work, while exploitation means performing actions that are known to work well.
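The exploration/exploitation balance is easy to sketch with a toy multi-armed bandit (all the numbers here -- payout odds, epsilon -- are made up for illustration, not how any real agent is configured):

```python
import random

# Toy epsilon-greedy agent on a three-armed bandit. The payout
# probabilities and epsilon are made-up numbers for illustration.
random.seed(1)
payout = [0.2, 0.5, 0.8]   # true chance each arm pays off (unknown to agent)
value = [0.0, 0.0, 0.0]    # agent's running estimate of each arm
pulls = [0, 0, 0]
epsilon = 0.1              # 10% of the time: explore

for step in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)        # explore: try any arm at random
    else:
        arm = value.index(max(value))    # exploit: best arm known so far
    reward = 1 if random.random() < payout[arm] else 0
    pulls[arm] += 1
    value[arm] += (reward - value[arm]) / pulls[arm]  # reinforce estimate

best = pulls.index(max(pulls))
print("settled on arm", best)
```

Actions that lead to good outcomes get their estimates reinforced, and the agent ends up spending most of its pulls on the highest-paying arm while still occasionally exploring the others.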

3

Mortal-Region t1_j7m2k8o wrote

Yeah, the argument assumes that the simulation is the "detail-on-demand" kind, meaning that when the simulated people run their own simulation, the real computer in base reality has to provide a tremendous amount of new detail -- roughly the same amount as is allocated for their world (assuming they run the same kind of simulation as the one they occupy).

So, for example, if the sims have just 10 simulations running simultaneously, the simulation they occupy will consume 11 times as much computational resources as before (including memory). Even if the computer in base reality grows to the size of an entire galaxy, just one more level down means you now need ten galaxies. All this just to keep a single simulation -- one that happens to contain sub-simulations -- running.
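A quick toy calculation of the blow-up (the branching factor of 10 is just the example number from above):

```python
# Back-of-the-envelope for the "detail-on-demand" blow-up: if every
# simulation hosts k full-detail sub-simulations, the base-reality
# computer must run a geometrically growing number of worlds.
k = 10
for depth in range(4):
    # total worlds the base computer must run, through this depth
    worlds = sum(k ** d for d in range(depth + 1))
    print(depth, worlds)   # 1, then 11, then 111, then 1111 worlds
```

One level of nesting multiplies the bill by 11; each further level multiplies it by roughly another factor of 10.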

I think it's more likely that the simulators will nip the problem in the bud by halting the sim just before sub-simulations become possible, thus also preventing sub-sub-simulations and sub-sub-sub simulations and so on. After all, the thing they're probably simulating is their own historical singularity, so once sub-simulations become possible, the simulation has pretty much run its course.

2

Mortal-Region t1_j7gvfkz wrote

Furthermore... if it's a simulation of a technological society, it might have to be shut down eventually, because if the simulated people advance far enough to create their own simulations -- perhaps millions of them -- then the strain on the computer would be too great. The simulation would slow to a crawl, and more urgently, it'd run out of memory.

2