emerging-tech-reader t1_j7kzd3i wrote
Reply to comment by WokeAssBaller in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
> https://arxiv.org/pdf/1706.03762.pdf the paper that made all this possible.
That's reaching, IMHO. The original Transformer was only around 65 million parameters. It's not even in the same realm as ChatGPT.
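For scale, here's a back-of-envelope count of the base model from the paper (my own sketch, ignoring biases, layer norms, and positional encodings):

```python
# Rough parameter count for the base Transformer from
# "Attention Is All You Need" (d_model=512, d_ff=2048, 6+6 layers).
d_model, d_ff, n_layers, vocab = 512, 2048, 6, 37000

attn  = 4 * d_model * d_model        # Q, K, V, and output projections
ffn   = 2 * d_model * d_ff           # two feed-forward weight matrices
enc   = n_layers * (attn + ffn)      # encoder: one attention block per layer
dec   = n_layers * (2 * attn + ffn)  # decoder: self- plus cross-attention
embed = vocab * d_model              # shared input/output embeddings

total = enc + dec + embed
print(f"{total / 1e6:.0f}M parameters")  # → 63M (paper reports ~65M)
```

Compare that to the 175 billion parameters reported for GPT-3: roughly three orders of magnitude.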
You may as well say that MIT invented it, since Google's paper is based on methods created by them.
emerging-tech-reader t1_j7ksup6 wrote
Reply to comment by WokeAssBaller in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
> OpenAI is built on google research
To my knowledge that is not remotely true. Can you cite a source for that claim?
OpenAI does take funding from, and share research with, a number of AI-related companies. I don't know whether Google is on that list.
emerging-tech-reader t1_j7kptn9 wrote
Reply to comment by WokeAssBaller in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
I got a demo of some of the stuff happening.
The most impressive one: they have GPT watch a meeting, take minutes, and even draft action items, emails, etc., all ready for you when you leave the meeting.
It will also offer suggestions to follow up on while the meeting is still in progress.
Google has become the AltaVista.
emerging-tech-reader t1_j7kh681 wrote
Reply to comment by st8ic in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
> given the volume of false information that chatGPT generates
It actually generates mostly accurate information. The longer the conversation runs, the more it starts to hallucinate, but it is considerably more accurate than most people.
emerging-tech-reader t1_ix7hpo7 wrote
Reply to comment by Stock-Violinist6297 in [R][D] Reading ML Papers - Workflow/Advice by EndlessRevision
These are good points.
OP, your workflow sounds very similar to the one suggested by Barbara Oakley with a dash of Feynman technique.
I would also recommend trying Papers with Code, since sometimes looking at the code makes more sense than the prose.
emerging-tech-reader t1_iswmsm0 wrote
Reply to comment by suricatasuricata in [D] How frustrating are the ML interviews these days!!! TOP 3% interview joke by Mogady
True. I skimmed over that; it cancels out my assumption.
emerging-tech-reader t1_iswhsrm wrote
Reply to comment by suricatasuricata in [D] How frustrating are the ML interviews these days!!! TOP 3% interview joke by Mogady
> To me this is a sign that they are overfitting to a specific type of candidate.
In banking/finance jobs it is quite common not to have access to anything beyond what is supplied internally as documentation.
That is what this suggests to me.
emerging-tech-reader t1_iswgvv3 wrote
Devil's advocate here, as I have conducted numerous interviews where we had to ask technical questions.
None of the interviews I conducted was a pass/fail on the technical question.
The purpose is to gauge the level of skills the applicant has.
Even experts will look at Stack Overflow, but how the applicant approaches the question tells you more than whether they are right or wrong.
Someone who has been working with a language/library for a long time building models would know the most common methods/syntax.
So if an applicant claimed to be an expert at pandas, then not knowing those commands would work against them.
The fact that they gave applicants access to the documentation means they were open to candidates of different skill levels.
I would also recommend being wary about mentioning that you grab code from Stack Overflow in an interview. Some roles have compliance requirements around code provenance, and saying you pull things from SO could disqualify you immediately.
...
My point is: just because there is a technical question, don't assume it's a BS interview, or that you will fail just because you don't know the answer straight away.
emerging-tech-reader t1_iqubs5s wrote
Reply to [D] Types of Machine Learning Papers by Lost-Parfait568
I saw an NLP/ML paper a few years back whose conclusion was essentially "this would never work", and they really tried. (I forget what they were attempting.)
emerging-tech-reader t1_j7p3gn4 wrote
Reply to comment by WokeAssBaller in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
> Please without the transformer we would never be able to scale,
Without backpropagation we wouldn't have transformers. 🤷‍♂️