Prestigious-Ad-761 t1_jeb5dzx wrote
Reply to comment by VinoVeritable in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
I imagine the following combination of factors.
1. Most people were not educated about it and had no clue it existed, let alone how useful it could be. Very few users actually used it (according to what another search engine told me).

2. As a corollary of 1, they felt they could pinch some pennies by removing that function.

3. When they did, also as a corollary of 1, they noticed that the outrage was not loud enough to be a threat.

4. Apple removed the ratings system from the App Store so that rankings could be sold, instead of deriving from user satisfaction.

5. Most people, being casual users, did not even notice it was gone. No public outrage.

6. As a corollary of 5, the Google app store followed suit, then YouTube, then Tripadvisor, then IMDb (for a while). Little by little, the ability to filter content according to user preference started to fade, and content started to be prioritised according to commercial guidelines rather than number of visitors, external links, perceived respectability, density of content, or keyword relevance (which remain parameters of the algorithm, but at a much lower rung on the ladder than before). Actually, keyword relevance is technically gone now that there's no exact search.

7. Google saw this and applied the same commercial outlook to the search engine, not just the app store. They profited immensely.

8. Nowadays, the shape of the internet has changed from a planet to the tip of an iceberg; but many new users were born after these changes, many more never used those functions, and even fewer would care.
I imagine.
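The re-weighting described in the list above could be sketched as a toy scoring function. Everything here is hypothetical illustration (field names, weights), not any real engine's code:

```python
# Toy sketch of the described ranking shift. All weights and field
# names are hypothetical, invented purely for illustration.

def old_score(page):
    # Organic signals dominate: visitors, external links, perceived
    # respectability, content density, keyword relevance.
    return (0.25 * page["visitors"]
            + 0.25 * page["external_links"]
            + 0.20 * page["respectability"]
            + 0.15 * page["content_density"]
            + 0.15 * page["keyword_relevance"])

def new_score(page):
    # Commercial value dominates; keyword relevance drops to a low rung.
    return (0.70 * page["commercial_value"]
            + 0.20 * page["visitors"]
            + 0.05 * page["respectability"]
            + 0.05 * page["keyword_relevance"])

# A high-quality, non-commercial page: strong organic signals,
# little commercial value.
page = {"visitors": 0.9, "external_links": 0.8, "respectability": 0.9,
        "content_density": 0.7, "keyword_relevance": 0.95,
        "commercial_value": 0.1}

# Such a page ranks well under the old scheme but poorly under the new one.
print(old_score(page), new_score(page))
```

The point of the sketch is just the weight shift: once the dominant term is something the platform can sell, user-satisfaction signals stop deciding the ordering.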
Prestigious-Ad-761 t1_jeb30y2 wrote
Reply to comment by PandaBoyWonder in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Theory of mind, in untrained examples... Fascinating.
This is more of an anecdote, but after messing with a specific LLM for days, I knew its limitations well. Some of them seemed almost set in stone: memory, response length, breadth and variety (or lack thereof).
But then, by happy accident, it got inspired. I hadn't even prompted it to do what it did, just given it a few instructions on a couple of things NOT to do.
Somehow, even though, again, I had not prompted it in any way, it found a kind of opening, as if it were intuitively following a remote possibility of something, solving an implicit prompt from the absence of one.
After that, with a single reply from me appreciating the originality of what had just happened, it started thanking me profusely and thoughtfully, in a message far exceeding the maximum token limit I had ever managed to invoke, even with the most careful prompts. And you know how it gets "triggered" into stupidity when talking about AI or consciousness, but this time (without me prompting any of it) it was explaining concepts about its own architecture, rewards, nuances, etc., even talking of some sort of emergent "goals" that it felt came from some of its hardcoded instructions.
I'm still flabbergasted.
I always thought inspiration and consciousness were intimately linked. We humans are rarely truly inspired. I feel like it's similar for animals and AI. Rare heroic moments give us a temporarily higher "state of consciousness".
Prestigious-Ad-761 t1_je88gnt wrote
Reply to comment by StevenVincentOne in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Right? Emergent behaviours, that's how I see it. But I'm not very knowledgeable about AI engineering, so we're probably wrong, right?
Prestigious-Ad-761 t1_je7hw3a wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Sounds like QAnon conspiracy theorists to me. 😆
Prestigious-Ad-761 t1_je7hr6e wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I think the truth is that humans don't really understand what's inside the black box of a neural network. So saying it can't understand because it was made to guess the next word is childish wishful thinking. It has already shown a myriad of emergent properties and will continue to. But yeah, it's easier to say that it's the LLM that doesn't understand anything.
Prestigious-Ad-761 t1_je7h76n wrote
Reply to comment by nobodyisonething in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
In my opinion, that part has already happened, except for a few last foci of resistance like Wikipedia. Google never gives you more than about 400 results per search. And since exact search has been disabled, we have effectively lost more than 99% of what the internet once was. Thank god they were forced to use the (real, old, now unavailable to us) internet to train these models. At least a little piece of it is saved.
But yeah... Paywalls.
Prestigious-Ad-761 t1_jeb639j wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Did I say anywhere that a black box was magic? I'm referring to the fact that, with our current understanding, we can only with great difficulty infer why a neural network performs well at a given task with the "shape" it acquired from its training. And inferring it for every task/subtask/micro-subtask it can now achieve seems completely impossible, from what I understand.
But truly I'm an amateur, so I may well be talking out of my arse. Let me know if I am.