Mortal-Region
Mortal-Region t1_jebbn1g wrote
Reply to comment by QuartzPuffyStar in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Yeah, Yann LeCun's name was on there. He tweeted that he didn't sign it and disagrees with the premise.
EDIT: To be more specific, LeCun said he didn't sign it & disagrees with the premise in response to a tweet that was subsequently deleted.
Mortal-Region t1_jeb9glo wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
The fact that it was filled with fake signatures sure doesn't help.
Mortal-Region t1_jeaia9q wrote
Reply to GPT characters in games by YearZero
>We aren’t making clear progress in game character AI like other stuff, and we need a proper leap.
Problem is, it's such a huge leap going from decision trees to autonomous agents who can form their own objectives and plans with respect to other characters and the environment. It's pretty much the same problem that brains evolved for. LLMs aren't agents, so I think they'll end up as automated dialogue generators, with the overall storyline still being "on rails".
Mortal-Region t1_je8jw22 wrote
Reply to comment by Not-Banksy in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Typically, humans provide the training data, then a program performs the actual training by looping through the data.
EDIT: One exception would be a game-playing AI that learns via self-play. Rather than humans supplying it training data in the form of games played by experts, the training data consists of the games the AI has played against itself.
Mortal-Region t1_je8h882 wrote
A neural network contains a large number of weights -- numbers representing the strengths of the connections between the artificial neurons. Training is the process of setting those weights in an automated way. Typically, a network starts out with random weights. Then training data is presented to the network, and the weights are adjusted incrementally until the network learns to do what you want. (That's the learning part of machine learning.)
For example, to train a neural network to recognize cats, you present it with a series of pictures, one after the other, some with cats and some without. For each picture, you ask the network to decide whether the picture contains a cat. Initially, the network guesses randomly because the weights were initialized randomly. But every time the network gets it wrong, you adjust the weights slightly in the direction that would have given the right answer. (Same thing when it gets the answer right; you reinforce the weights that led to the correct answer.)
For larger neural networks, training requires an enormous amount of processing power, and the workload is distributed across multiple computers. But once the network is trained, it requires much less power to just use it (e.g., to recognize cats).
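The adjust-the-weights-when-wrong loop described above can be sketched in a few lines. This is a toy single-neuron example (a perceptron), not how real deep networks are trained -- those use gradient descent over millions or billions of weights -- but the incremental-adjustment idea is the same. The two "features" standing in for a cat picture are made up purely for illustration:

```python
import random

# Toy illustration of the training loop: a single artificial "neuron"
# starts with random weights and nudges them a little each time it
# answers wrong, until it separates "cat" inputs from "non-cat" inputs.

def train(examples, epochs=50, lr=0.1):
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(2)]  # start with random weights
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            guess = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            error = label - guess            # 0 if correct, +/-1 if wrong
            w[0] += lr * error * x[0]        # adjust weights slightly in the
            w[1] += lr * error * x[1]        # direction of the right answer
            b += lr * error
    return w, b

# Hypothetical "pictures" reduced to two features (say, pointy ears, fur)
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, b = train(examples)
predict = lambda x: 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
print(predict((0.85, 0.9)), predict((0.15, 0.1)))  # prints: 1 0
```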
Mortal-Region t1_je79d9o wrote
Reply to comment by 94746382926 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Musk is on the advisory board of the Future of Life Institute and is the primary donor.
Hmmm...
Mortal-Region t1_je6chkt wrote
Elon Musk signed the letter -- I assume because he needs time to catch up.
Mortal-Region t1_jdqa7pq wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Imagine you set up your time machine to take you to a random place and time on Earth. You press the button... and land just in time to witness the crucifixion of Christ. Yes, it might be random chance, but you'd be wise to suspect your mischievous lab assistant.
Mortal-Region t1_jd47m0x wrote
Reply to Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
Don't forget 2001: A Space Odyssey, which features an AI with an ill-considered objective function. Beware, though, it's super-slow-burn -- the idea is that you're supposed to proactively insert yourself into the locales, maybe with a bit of assistance from a mild substance.
Also, The Thirteenth Floor, a simulated-reality story. It's got a lousy tomatometer, but I think it's better than The Matrix, which came out a couple of months earlier. I think a lot of reviewers didn't follow the plot. You've really got to pay attention (or watch it more than once).
Also in the category of simulation stories that got blown away by The Matrix is eXistenZ. It's directed by Cronenberg, so, as you'd expect, the tech has a creepy biological quality.
Mortal-Region t1_jajoks0 wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
I think genetic fuzzy trees are being considered for unmanned aerial combat.
Mortal-Region t1_jaeivnt wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
People seem to have the idea of a singular, global AGI stuck in their heads. Why wouldn't there be multiple instances? Millions, even? If one goes rogue, we've got the assistance of all the others to contain it.
Mortal-Region t1_j9wzepc wrote
Reply to We are in the early days of AI used as tool for biological design. It’s potential to design new proteins + DNA sequences from the building blocks of life is astonishing. by MichaelTen
Kurzgesagt put out a pretty good video about proteins recently. A nice, big-picture overview with great visualizations.
Mortal-Region t1_j9w2oaj wrote
Reply to comment by Cryptizard in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
But would it explain us being so early within the timeline of the first civilization?
Mortal-Region t1_j9w0tqc wrote
Reply to comment by Cryptizard in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
>The universe is only 14 billion years old, and it will have conditions for life to arise for another 10-100 trillion years.
Which begs an interesting question -- why so early? If the timeline is 7 meters long, why do we happen to find ourselves in the first millimeter? It gets even more acute if you allow for the possibility of digital civilizations. They'd survive the black hole era, so now the timeline is many times the diameter of the Milky Way. Yet here we are in the first millimeter. And that millimeter represents the entire time since the Big Bang. Considering that computers were invented less than a century ago, it all seems very fishy.
Mortal-Region t1_j9vjeyi wrote
> There has yet to be a singularity event in the Milky Way which would suggest that humans are the first technologically advanced civilization in the galaxy since its formation, which is statistically very unlikely.
If the Great Filter is in our past, then it's not unlikely for us to be the first technological civilization. In fact, we should expect to be the first because it's unlikely for multiple species to make it through the filter simultaneously. I guess it's still weird that we made it through the filter in the first place. But it's the same kind of weirdness that arises from the fact that we happen to be intelligent humans rather than slugs. It's only weird if you're intelligent enough to think about such things.
What is weird, I think, is that we happen to exist right at the moment of the singularity. A galactic civilization would have a lifespan of billions or even trillions of years, yet here we are, witnessing the birth of AI, space travel, etc. -- the very tech that makes galactic civilization possible. The first electronic computer was invented less than 80 years ago!
Mortal-Region t1_j9ggdp9 wrote
Reply to comment by Zalameda in The dreamers of dreams by [deleted]
In any system that continuously accumulates new memories, there's the problem of how to balance newer memories (e.g., "There's a bear in the cave") with older ones (e.g., "Bears are faster than people"). If nothing is done about this problem, then newer memories will simply crowd out older ones, leading to catastrophic forgetting. You'll know that there's a bear in the cave, but not that bears are faster than people.
Nature's solution is to periodically take the system offline in order to perform a memory integration procedure. At night, recent memories are replayed; they are uploaded from the hippocampus (literally up) and "stirred" into the cortex, where longer-term memories and general knowledge of the world are stored.
This replay procedure is performed during both REM sleep (when dreams occur) and non-REM sleep. It's thought that dreams occur during REM sleep because this is when the replay procedure is performed on the kinds of memories that pertain to immediate conscious awareness. (Non-REM sleep seems to deal more with motor memories.)
Mortal-Region t1_j9bjuue wrote
Not only is work progressing, but it could best be described as a mad rush. IBM, Intel, Microsoft, Google, Amazon, and others are all working on the problem, approaching it from slightly different angles. Any one of them might report a breakthrough at any moment.
Mortal-Region t1_j8lcmqf wrote
Reply to comment by phloydde in Speaking with the Dead by phloydde
Yeah, I think that's right. An agent should maintain a model of its environment in memory, and continually update the model according to its experiences. Then it should act to fill in incomplete portions of the model (explore unfamiliar terrain, research poorly understood topics, etc).
Mortal-Region t1_j8j9mii wrote
Reply to Speaking with the Dead by phloydde
Neural networks in general are basically gigantic, static formulas: get the input numbers, multiply & add a billion times, report the output numbers. What you're imagining is more like reinforcement learning, which is the kind of learning employed by intelligent agents. An intelligent agent acts within an environment, and actions that lead to good outcomes are reinforced. An agent balances exploration with exploitation; exploration means trying out new actions to see how well they work, while exploitation means performing actions that are known to work well.
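The exploration/exploitation balance has a classic minimal form: the epsilon-greedy rule on a multi-armed bandit. This is a hypothetical sketch, not anything from the original post -- with probability epsilon the agent tries a random action (explore); otherwise it picks the action with the best average payoff so far (exploit):

```python
import random

# Epsilon-greedy on a two-armed bandit: each "arm" pays off 1.0 with
# some fixed probability. The agent learns which arm is better purely
# from the rewards of its own actions.

def run_bandit(payoffs, steps=5000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    totals = [0.0] * len(payoffs)   # total reward earned per action
    counts = [0] * len(payoffs)     # times each action was tried
    for _ in range(steps):
        if rng.random() < epsilon or 0 in counts:
            action = rng.randrange(len(payoffs))          # explore
        else:
            averages = [t / c for t, c in zip(totals, counts)]
            action = averages.index(max(averages))        # exploit
        reward = 1.0 if rng.random() < payoffs[action] else 0.0
        totals[action] += reward
        counts[action] += 1
    return counts

# Action 1 pays off 80% of the time, action 0 only 20%; after training,
# the agent spends the great majority of its steps on action 1.
counts = run_bandit([0.2, 0.8])
print(counts)
```

Actions that lead to good outcomes get chosen more often -- that's the reinforcement -- while the occasional random action keeps the agent from getting stuck on an early, mediocre choice.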
Mortal-Region t1_j8j5roi wrote
Reply to comment by [deleted] in An AI recently piloted a Lockheed Martin aircraft for over 17 hours during a testing period in December. by Dalembert
F-22 is useless?
Mortal-Region t1_j8j28b0 wrote
Reply to comment by [deleted] in An AI recently piloted a Lockheed Martin aircraft for over 17 hours during a testing period in December. by Dalembert
The spec for the next-gen fighter calls for both manned and unmanned modes. So that's what Lockheed is up to here. The next-gen plans also call for unmanned drones to operate as wingmen.
Mortal-Region t1_j7m2k8o wrote
Reply to comment by peterflys in The Simulation Problem: from The Culture by Wroisu
Yeah, the argument assumes that the simulation is the "detail-on-demand" kind, meaning that when the simulated people run their own simulation, the real computer in base reality has to provide a tremendous amount of new detail -- roughly the same amount as is allocated for their world (assuming they run the same kind of simulation as the one they occupy).
So, for example, if the sims have just 10 simulations running simultaneously, the simulation they occupy will consume 11 times as much computational resource (including memory) as before. Even if the computer in base reality grows to the size of an entire galaxy, just one more level down means you need roughly ten galaxies. All this just to keep a single sub-simulation-hosting simulation running.
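The arithmetic works out like this: if every simulation hosts the same number of sub-simulations, the total cost (in units of one simulation) grows geometrically with nesting depth. A quick sketch using the branching factor of 10 from the example:

```python
def total_cost(branching, depth):
    """Total simulations to run, in units of one simulation's cost,
    when every simulation down to `depth` hosts `branching` sub-sims."""
    return sum(branching ** d for d in range(depth + 1))

print(total_cost(10, 1))  # 11   -- one sim plus its 10 sub-sims
print(total_cost(10, 2))  # 111  -- one more level: ~10x the total bill
print(total_cost(10, 3))  # 1111
```

Each extra level multiplies the total bill by roughly the branching factor, which is why halting the sim before sub-simulations become possible cuts off the whole geometric series at once.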
I think it's more likely that the simulators will nip the problem in the bud by halting the sim just before sub-simulations become possible, thus also preventing sub-sub-simulations and sub-sub-sub simulations and so on. After all, the thing they're probably simulating is their own historical singularity, so once sub-simulations become possible, the simulation has pretty much run its course.
Mortal-Region t1_j7lv3t9 wrote
Reply to comment by iNstein in The Simulation Problem: from The Culture by Wroisu
If the simulators are digital beings themselves, then the occupants of the sim could be preserved just by moving them.
Mortal-Region t1_j7gvfkz wrote
Reply to The Simulation Problem: from The Culture by Wroisu
Furthermore... if it's a simulation of a technological society, it might have to be shut down eventually, because if the simulated people advance far enough to create their own simulations -- perhaps millions of them -- then the strain on the computer would be too great. The simulation would slow to a crawl, and more urgently, it'd run out of memory.
Mortal-Region t1_jebev1o wrote
Reply to comment by QuartzPuffyStar in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Yeah, bonehead move -- seems they just set up a simple web form.
But what's suspicious to me is that the letter specifically calls for a pause on "AI systems more powerful than GPT-4." Not self-driving AI or Stable Diffusion or anything else. GPT-4 is the culprit that needs to be paused for six months.
Then, of course, there's the problem that there's no way to pause China, Russia, or any other authoritarian regime. I've never heard a real solution to that. It's more like the AI-head-start-for-authoritarian-regimes letter.