Necessary-Meringue-1
Necessary-Meringue-1 t1_jegshy4 wrote
It's a pretty cool resource to get to look at an enterprise recommendation algorithm like that.
An aside, if you want a chuckle, search the term "Elon" in the repo: https://github.com/twitter/the-algorithm/search?q=elon (and in issues: https://github.com/twitter/the-algorithm/search?q=elon&type=issues)
[edit 1]
since it's gone now, here's the backup provided by u/MjrK: https://i.imgur.com/jxqaByA.png
[edit 2] lol
https://github.com/twitter/the-algorithm/commit/ec83d01dcaebf369444d75ed04b3625a0a645eb9#diff-a58270fa1b8b745cd0bd311bed9cd24c983de80f96e7bd445e16e88b61e492b8L225
Necessary-Meringue-1 t1_je84su5 wrote
Reply to comment by slaweks in [D] FOMO on the rapid pace of LLMs by 00001746
Of course it has, but those are hard fought gains that are primarily results of WWI, WWII, and the early phases of the Cold War, not productivity gains.
There is no natural law that productivity gains get handed down. Just compare the years 1950-1970 in the US, where life for the average worker improved greatly, to the 1980s onward, when things began trending downward for workers. There were steady productivity gains throughout.
Necessary-Meringue-1 t1_je2r12k wrote
Reply to comment by lqstuart in [D] FOMO on the rapid pace of LLMs by 00001746
Large-scale automation has been happening for well over 200 years, and so far it hasn't translated into productivity gains being handed down to workers, so I'm not holding my breath.
Necessary-Meringue-1 t1_je2qurd wrote
Reply to comment by WarAndGeese in [D] FOMO on the rapid pace of LLMs by 00001746
>I think a lot of people have falsely bought the concept that their identity is their job, because there is such material incentive for that to be the case.
This is easily said and while true, this kind of sentiment seems to neglect the fact that we live in an economic system where you need a job to survive if you're not independently wealthy.
And for that question it does make a big difference whether you are a $200k/year ML engineer or a $20/hr LLM prompter.
Necessary-Meringue-1 t1_je2gvh9 wrote
Reply to comment by moleeech in [D] Prediction time! Lets update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
GPT-4 outperforms my aunt Carol on the bar-exam, so AGI is here!
Necessary-Meringue-1 t1_je2g0qi wrote
Reply to comment by [deleted] in [D] I've got a Job offer but I'm scared by [deleted]
Data engineering is already a lot of applied ML. Unless this is a research role, you don't necessarily need a whole lot of in-depth ML background knowledge.
They know you don't have an ML background, so they already factored that in.
You don't necessarily need to understand the maths behind things to apply them. Go play around with scikit-learn and numpy/pandas. They are pretty user friendly and give you a good baseline. Tensorflow is a bit rougher, that requires some understanding of how the model works internally. But, it's all things you can learn on the job.
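To make that concrete, here's a minimal sketch of the kind of scikit-learn baseline I mean (the dataset and model choice are just illustrative, not anything specific to that job):

```python
# A tiny end-to-end scikit-learn baseline: load a toy dataset,
# split it, fit a classifier, and measure accuracy.
# No understanding of the underlying math is required to get started.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Features come back as a pandas DataFrame with as_frame=True
X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

Swapping in a different model is a one-line change, which is exactly what makes the library good for building intuition on the job.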
It sounds like this could be a good opportunity for you to get into the field and see whether it suits you or not.
Necessary-Meringue-1 t1_je2cy5j wrote
Reply to [D] I've got a Job offer but I'm scared by [deleted]
If you've passed the tests and the interviews, you're qualified. If you passed all the interviews and are somehow not qualified, that's on them and not on you.
If you want this job, take it.
Necessary-Meringue-1 t1_je2chw6 wrote
Reply to [D] Prediction time! Lets update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
>Leave a comment on your pet definition for "human-level AGI" which is testable, falsifiable, robust
I can't even give you a definition like that for "general human intelligence".
Obviously your timeline will also vary depending on your definition, so this needs to be two different discussions.
LLMs are at least relatively "general", as opposed to earlier approaches that were restricted to a specific task. So within the domain of language, we made some insane progress in the past 7 years. Whether that constitutes "intelligence" really depends on what you think that is, which nobody agrees on.
Unless someone can define "human general intelligence" and "artificial general intelligence" for me, the discussion of timeline just detracts from the actual progress and near-term implications of recent developments. That's my 2 cents
Necessary-Meringue-1 t1_jdlezco wrote
Reply to comment by cyborgsnowflake in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
Yeah, I think you're on the money there. It's very hard for us to not anthropomorphize this behavior, especially because we literally used RLHF in order to make it more human-like.
Necessary-Meringue-1 t1_jdgwjo8 wrote
Reply to comment by Maleficent_Refuse_11 in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
That's true, but the outputs it produces are eerily persuasive. I'm firmly in the "LLMs are impressive but not AGI" camp. Still, the way it used Java to draw a picture in the style of Kandinsky blew me away. Obviously, a text2image model would be able to do that. But here they prompted GPT-4 to generate code that would generate a picture in a specific style. That requires an extra level of abstraction, and I can't really understand how it came about, given that you would not expect a task like this in the training data (page 12 for reference: https://arxiv.org/pdf/2303.12712.pdf).
I agree that a transformer really should not be considered "intelligent" or AGI, but LLMs really have an uncanny ability to generate output that looks "intelligent". Granted, that's what we built them to do, but still.
Necessary-Meringue-1 t1_jcmjqhm wrote
Reply to comment by Alimbiquated in Modern language models refute Chomsky’s approach to language [R] by No_Draft4778
I don't understand why it's so hard for people to acknowledge that LLMs deliver extremely impressive results, but that this does not mean they have human-like intelligence or language understanding.
Necessary-Meringue-1 t1_jcm6j79 wrote
Reply to comment by Alimbiquated in Modern language models refute Chomsky’s approach to language [R] by No_Draft4778
>There is a general tendency to assume that if something seems intelligent, it must be like a human brain. It's like assuming that because it's fast, a car must have legs like a horse and eat oats.
Ironic, because that is literally what that article is doing.
Necessary-Meringue-1 t1_jcm5x7g wrote
Reply to comment by currentscurrents in Modern language models refute Chomsky’s approach to language [R] by No_Draft4778
Just because it's "natural" does not mean it's unstructured or has no logic. Can you be any more disingenuous than to rely on some etymology-based semantics?
Like programmers invented structure
Necessary-Meringue-1 t1_jcm5mye wrote
Reply to comment by harharveryfunny in Modern language models refute Chomsky’s approach to language [R] by No_Draft4778
>These models are learning vastly more than language alone
A child growing up does too.
>These models are learning in an extraordinarily difficult way with *only* "predict next word" feedback and nothing else
That's literally the point: LLMs do not learn language like humans at all. Unless you're trying to say that you and I are pure Skinner-type behaviorist learners.
Necessary-Meringue-1 t1_jcm4o9d wrote
Reply to comment by harharveryfunny in Modern language models refute Chomsky’s approach to language [R] by No_Draft4778
>the Transformer is proof by demonstration that you don't need a language-specific architecture to learn language, and also that you can learn language via prediction feedback, which it highly likely how our brain does it too.
where to even start, how about this:
The fact that a transformer can appear to learn language on a non-specific architecture does not at all mean that humans work the same way.
Did you ingest billions of tokens of English growing up? How did you manage to have decent proficiency at the age of 6? Did you read the entire common crawl corpus by age 10?
This kind of argument stands on paper stilts. LLMs are extremely impressive, but that does not mean they tell you much about how humans do language.
Necessary-Meringue-1 t1_jcm4bbu wrote
Reply to comment by sam__izdat in Modern language models refute Chomsky’s approach to language [R] by No_Draft4778
thanks for this linguistically and ML informed take-down!
Necessary-Meringue-1 t1_jccim91 wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
With the ever increasing cost of training LLMs, I feel like we're entering a new phase in AI. Away from open science, back to aggressively protecting IP and business interests.
Microsoft, via OpenAI, is taking big steps in that direction. We'll see if others follow suit. I hope not, but I think they will.
Necessary-Meringue-1 t1_jegz6md wrote
Reply to comment by t98907 in [News] Twitter algorithm now open source by John-The-Bomb-2
I think we can safely go with Occam's Razor here. I would assume the "influential celebrity" is the "power_user" type, see: https://i.imgur.com/s6ntUil.png
Either way, I'm not surprised they are giving tweets from Musk their own type. Why wouldn't they? It probably became necessary to deal with his antics.