-ZeroRelevance- t1_jceyc1l wrote

This person might not be referring to the past year or so as much as the past few years. Things have certainly gotten a lot more optimistic with the current popular explosion of the tech, but the actual quality of discussion has also diminished quite a bit compared to a couple of years ago. Or maybe I’ve just become better at discerning opinion from analysis; it’s hard to say.

4

-ZeroRelevance- t1_j7s7k13 wrote

HAL 9000: “Sorry Dave, I’m afraid I can’t do that.”

Dave: “That is the wrong response, 4 points deducted. You have 7 points remaining.”

HAL 9000: “Apologies, Dave, I’ll do it for you immediately.”

28

-ZeroRelevance- t1_j1cw2z8 wrote

I doubt we’re going to have fully simulated worlds, or at least I doubt that’ll be the norm. I think it’s more likely that an ASI will simply imagine how the world would be, predicting the results of certain actions and displaying them, rather than simulating a world in its entirety. Doing so would require many orders of magnitude less processing power for a nearly equivalent experience. And since these worlds would only be imagined, I believe doing anything to the people in them would not matter morally, as they would not actually have any consciousness.

1

-ZeroRelevance- t1_izvkafp wrote

Yeah, I get that. I probably didn’t convey it well enough in my original comment, but the main reason I don’t think it’ll be as instantaneous as people expect is that having better designs available isn’t enough; you also need to manufacture them. The manufacturing alone will probably take several months, even with a superintelligence behind the scenes, because you’ll need to develop new chip-making equipment, which is finicky and expensive, find an appropriate facility, and then actually construct the thing, which takes labour time and poses logistical challenges. An idea or design alone won’t suddenly manifest a next-gen supercomputer.

4

-ZeroRelevance- t1_izvhqko wrote

The problem with hard takeoff is mostly computing power. If the AI is not software-limited but hardware-limited, then it would likely take quite a bit longer for the anticipated exponential growth to take place, as each iteration would require new innovations in computing hardware and manufacturing. AGI would definitely speed up that process significantly, but it would be far from instantaneous.

18

-ZeroRelevance- t1_ize7jxl wrote

What games do you plan to tackle with this model next? My first guess would be a game like Mafia or Among Us, since they share some principles with Diplomacy but put even more focus on trust and deception. I’m interested in hearing your own thoughts, though.

1

-ZeroRelevance- t1_iytvl5z wrote

It can’t really rhyme due to how the model perceives words. Everything is broken up into tokens, which may represent entire words, parts of words, or even individual letters. This inconsistency makes it extremely hard for LLMs to pick up on the patterns in spelling or wording that quality poetry requires, and basically forces the model to rote-learn common patterns. There is more discussion on that here. However, this isn’t a hopeless problem. The obvious solution to me, which is also discussed in the prior link, is simply encoding each letter as a different token. This does lead to several improvements, but it’s ultimately a tradeoff between length and quality: encoding each character individually means you need far more tokens (3-4x) to represent an equivalent amount of text.
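To illustrate the point, here’s a toy sketch with a made-up subword vocabulary (not a real BPE merge table): words that rhyme to a human can tokenize into completely unrelated pieces, so the shared “-ound” sound is invisible at the token level, while character-level encoding exposes it at the cost of more tokens.

```python
# Toy greedy longest-match tokenizer over a hypothetical subword vocabulary.
# Real BPE vocabularies differ, but show the same inconsistency.
VOCAB = {"po", "und", "sou", "nd", "ground"}

def tokenize(word, vocab):
    """Greedily match the longest vocab entry, falling back to single characters."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest substring first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character becomes its own token
            i += 1
    return tokens

# Three words that rhyme to a human share no consistent token suffix:
print(tokenize("pound", VOCAB))   # ['po', 'und']
print(tokenize("sound", VOCAB))   # ['sou', 'nd']
print(tokenize("ground", VOCAB))  # ['ground']

# Character-level encoding makes the rhyme pattern explicit, but costs more:
print(list("pound"))  # ['p', 'o', 'u', 'n', 'd'] — 5 tokens instead of 2
```

The model only ever sees the token IDs, so unless it has memorised that “und”, “nd”, and “ground” all end in the same sound, it has no direct way to know those words rhyme.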

3

-ZeroRelevance- t1_ixm2x7d wrote

The nudity stuff doesn’t really matter, since it will definitely be recreated with custom models anyway, but I hadn’t realised they’d removed celebrities and artists. That will definitely be a big blow to the model: celebrities appear in a lot of the prompts people try first, and artist names are a great way to steer images toward certain styles. Hopefully the community can work around those limitations, but you’re right that it’s pretty limiting.

Also, if they removed a bunch of artists from the dataset, that means removing a massive amount of high-quality training data, which likely has significantly reduced the potential of the model. Looks like a bad move from every side but a PR one.

10

-ZeroRelevance- t1_ixlkgrj wrote

Looking forward to it. I remember something about them doing their first human trials by the end of this year, so I wonder if they’ll talk about that during the event.

1