Necessary-Meringue-1 t1_jegshy4 wrote

It's a pretty cool resource to get to look at an enterprise recommendation algorithm like that.

An aside, if you want a chuckle, search the term "Elon" in the repo: https://github.com/twitter/the-algorithm/search?q=elon and in the issues: https://github.com/twitter/the-algorithm/search?q=elon&type=issues

[edit 1]
since it's gone now, here's the backup provided by u/MjrK: https://i.imgur.com/jxqaByA.png
[edit 2] lol
https://github.com/twitter/the-algorithm/commit/ec83d01dcaebf369444d75ed04b3625a0a645eb9#diff-a58270fa1b8b745cd0bd311bed9cd24c983de80f96e7bd445e16e88b61e492b8L225

100

Necessary-Meringue-1 t1_je84su5 wrote

Of course it has, but those are hard-fought gains that are primarily the result of WWI, WWII, and the early phases of the Cold War, not of productivity gains.

There is no natural law that says productivity gains get handed down. Just compare the years 1950-1970 in the US, when life for the average worker improved greatly, to the 1980s onward, since when we've been in a downward trend. There were steady productivity gains throughout all of that.

2

Necessary-Meringue-1 t1_je2qurd wrote

>I think a lot of people have falsely bought the concept that their identity is their job, because there is such material incentive for that to be the case.

This is easily said, and while it's true, this kind of sentiment neglects the fact that we live in an economic system where you need a job to survive unless you're independently wealthy.

And for that question it does make a big difference whether you are a $200k/year ML engineer or a $20/hr LLM prompter.

3

Necessary-Meringue-1 t1_je2g0qi wrote

Data engineering is already a lot of applied ML. Unless this is a research role, you don't necessarily need a whole lot of in-depth ML background knowledge.

They know you don't have an ML background, so they already factored that in.

You don't necessarily need to understand the maths behind things to apply them. Go play around with scikit-learn and numpy/pandas. They are pretty user-friendly and give you a good baseline. TensorFlow is a bit rougher; it requires some understanding of how the model works internally. But these are all things you can learn on the job.
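To make that concrete, here's a minimal sketch of the kind of baseline workflow the comment is recommending: load a toy dataset into a pandas DataFrame and fit a simple scikit-learn classifier. The dataset and model choice here are just illustrative, not something from the original comment.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a built-in toy dataset as a pandas DataFrame to practice the workflow
data = load_iris(as_frame=True)
X, y = data.data, data.target

# Hold out a test split so we can sanity-check the model
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A simple, user-friendly baseline model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Mean accuracy on the held-out data
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

The point is less the specific model than getting comfortable with the fit/score pattern, which carries over to most estimators in the library.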

It sounds like this could be a good opportunity for you to get into the field and see whether it suits you or not.

3

Necessary-Meringue-1 t1_je2chw6 wrote

>Leave a comment on your pet definition for “human-level AGI” which is
>
>testable
>
>falsifiable
>
>robust

I can't even give you a definition like that for "general human intelligence".

Obviously your timeline will also vary depending on your definition, so this needs to be two different discussions.

LLMs are at least relatively "general", as opposed to earlier approaches that were restricted to a specific task. So within the domain of language, we made some insane progress in the past 7 years. Whether that constitutes "intelligence" really depends on what you think that is, which nobody agrees on.

Unless someone can define "human general intelligence" and "artificial general intelligence" for me, the discussion of timeline just detracts from the actual progress and near-term implications of recent developments. That's my 2 cents

13

Necessary-Meringue-1 t1_jdgwjo8 wrote

That's true, but the outputs it produces are eerily persuasive. I'm firmly in the "LLMs are impressive but not AGI" camp. Still, the way it used Java to draw a picture in the style of Kandinsky blew me away. Obviously, a text2image model would be able to do that. But here they prompted GPT-4 to generate code that would generate a picture in a specific style. That requires an extra level of abstraction, and I can't really understand how it came about, given that you would not expect a task like this in the training data. (page 12 for reference: https://arxiv.org/pdf/2303.12712.pdf)

I agree that a transformer really should not be considered "intelligent" or AGI, but LLMs really have an uncanny ability to generate output that looks "intelligent". Granted, that's what we built them to do, but still.

32

Necessary-Meringue-1 t1_jcm5mye wrote

>These models are learning vastly more than language alone

A child growing up does too.

>These models are learning in an extraordinarily difficult way with *only* "predict next word" feedback and nothing else

That's literally the point: LLMs do not learn language like humans at all. Unless you're trying to say that you and I are pure Skinner-type behaviorist learners.

1

Necessary-Meringue-1 t1_jcm4o9d wrote

>the Transformer is proof by demonstration that you don't need a language-specific architecture to learn language, and also that you can learn language via prediction feedback, which it highly likely how our brain does it too.

where to even start, how about this:

The fact that a transformer can appear to learn language on a non-specific architecture does not at all mean that humans work the same way.

Did you ingest billions of tokens of English growing up? How did you manage to have decent proficiency at the age of 6? Did you read the entire common crawl corpus by age 10?

This kind of argument is on stilts. LLMs are extremely impressive, but that does not mean they tell you much about how humans do language.

1

Necessary-Meringue-1 t1_jccim91 wrote

With the ever increasing cost of training LLMs, I feel like we're entering a new phase in AI. Away from open science, back to aggressively protecting IP and business interests.

Microsoft, via OpenAI, is taking big steps in that direction. We'll see if others follow suit. I hope not, but I think they will.

8