Left-Shopping-9839 t1_j5r2qi7 wrote
Reply to comment by pre-DrChad in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
I've considered leaving this sub because it sounds like a religious cult lately... [edit] I'm just going to leave now.
Left-Shopping-9839 t1_j33pp82 wrote
Reply to comment by ebolathrowawayy in 2022 was the year AGI arrived (Just don't call it that) by sideways
Thanks for the tip. I'll try that.
Left-Shopping-9839 t1_j32f2t0 wrote
Reply to comment by footurist in 2022 was the year AGI arrived (Just don't call it that) by sideways
I use copilot for everything and I love it. There are times when it spits out code that looks exactly like what I'm thinking, and does it better than I could. In those moments I could easily claim the singularity has arrived. Then the next time it produces something that calls a library of functions I don't even have imported, and sometimes one that doesn't even exist, lol. So even if they work out the simple stuff, it's still a long way from being anything other than awesome code completion.
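A minimal sketch of the failure mode I mean. The library name is made up (`text_metrics` doesn't exist, which is exactly the point): the suggestion reads perfectly, but the import it leans on was never there.

```python
# Copilot-style completion: clean call into a helper library...
# ...except that library was never imported, and may not even exist.
try:
    import text_metrics  # hypothetical package the completion assumed
except ImportError:
    text_metrics = None

def summarize(doc: str) -> str:
    if text_metrics is None:
        # The human fix: fall back to something that actually runs.
        return doc[:100]
    return text_metrics.summarize(doc, max_len=100)
```

Without the guard, this is a NameError or ImportError waiting for runtime — the kind of thing you only catch by reading the suggestion closely.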
Left-Shopping-9839 t1_j32drm7 wrote
Reply to comment by [deleted] in 2022 was the year AGI arrived (Just don't call it that) by sideways
Agree 100%. In my company (and I think most others are this way) your code has to pass tests. This is what's missing in the copilot model. They would need to track feedback all the way to production, including any fixes applied, to know whether a code suggestion was actually good. That's the kind of learning loop that needs to be in place to even start to claim intelligence. Hopefully they're working on it. I use copilot and honestly I love it. It's not going to replace humans in its current iteration, yet the hype train keeps rolling. LLMs are mockingbirds. They are impressively good, but still mockingbirds. DALL-E, imo... is shit.
Left-Shopping-9839 t1_j3243r8 wrote
Reply to comment by [deleted] in 2022 was the year AGI arrived (Just don't call it that) by sideways
If you actually did real software development you would know this isn't possible. By 'you' I mean anyone, not specifically you. I have spent hours tracing strange errors back to the fact that I didn't check the copilot code closely enough. It does a great job producing code that is 90% correct, but it often slips in undeclared variables and the like. This is not 'intelligence'. It's just an awesome code completion tool that makes a lot of mistakes but still saves a lot of typing.
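A hedged sketch of the undeclared-variable trap (names invented for the example): the first function looks complete, but `discount_rate` is never defined anywhere, so it blows up at call time.

```python
def apply_discount(price: float) -> float:
    # Copilot-style completion: reads well, but `discount_rate`
    # was never declared anywhere -- NameError at runtime.
    return price * (1 - discount_rate)

def apply_discount_fixed(price: float, discount_rate: float = 0.1) -> float:
    # The human fix: declare the missing input explicitly.
    return price * (1 - discount_rate)
```

Python happily compiles the buggy version; the error only surfaces on the first call, which is how these things end up costing hours.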
Left-Shopping-9839 t1_j31u10m wrote
Reply to comment by bernard_cernea in 2022 was the year AGI arrived (Just don't call it that) by sideways
Oh, you sound like someone who has actually used the tools. All the hype comes from people who have only read about them. Just try putting code written by copilot straight into production!
Left-Shopping-9839 t1_j1hw10f wrote
Reply to Hype bubble by fortunum
Well said. Try blindly committing code written by copilot. That’s the easiest way for AI to take your job!
Left-Shopping-9839 t1_iy1tfxp wrote
Sad that the politicians would be last. I'd love to see representatives replaced by an AI that simply figures out what most people in the district want, so the vote is just a calculation.
Left-Shopping-9839 t1_j5rgb36 wrote
Reply to comment by Eleganos in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
Ok, not really the right comparison to religion. But there is definitely a dogma infecting this sub:
Consciousness will emerge from LLMs.
Massive loss of jobs and unemployment is imminent.
Neither of these claims has any credible evidence to support it, yet they are vigorously defended whenever any skepticism is voiced. So in that way it reminds me of religion, and it certainly is not grounded in 'science'.
I love the incredible progress we've seen in the area of ML and AI. But the idea that consciousness will simply emerge from a large enough neural network is still a hypothesis. It is a hypothesis worth chasing, for sure, but not a certainty. ChatGPT being able to surprise the user with a 'thoughtful' response is not evidence, imo.
Also, the CEO of some AI venture claiming 'you won't believe what's coming next' should be taken with a grain of salt. I mean, it's their job to promote their company.
I like evidence. And I'm finding very little of that here. This is why I left. Goodbye.