Left-Shopping-9839

Left-Shopping-9839 t1_j5rgb36 wrote

Ok not really the right comparison to religion. But there is definitely a dogma infecting this sub.

  1. Consciousness will emerge from LLMs.

  2. Massive loss of jobs and unemployment is imminent.

Neither of these claims has any credible evidence to support it, yet they are vigorously defended whenever any skepticism is voiced. In that way it reminds me of religion, and it certainly is not grounded in 'science'.

I love the incredible progress we've seen in the area of ML and AI. But the idea that consciousness will simply emerge from a large enough neural network is still a hypothesis. It's a hypothesis worth chasing, for sure, but not a certainty. ChatGPT being able to surprise the user with a 'thoughtful' response is not evidence, imo.

Also, the CEO of some AI venture claiming 'you won't believe what's coming next' should be taken with a grain of salt. I mean, it's their job to promote their company.

I like evidence. And I'm finding very little of that here. This is why I left. Goodbye.

−4

Left-Shopping-9839 t1_j32f2t0 wrote

I use copilot for everything and I love it. There are times when it spits out code that looks exactly like what I'm thinking, and does it better than I could. In those moments I could easily claim the singularity has arrived. Then the next time it creates something that calls a library of functions I don't even have imported, and which sometimes doesn't even exist, LOL. So even if they work out the simple stuff, it's still a long way from being anything other than awesome code completion.

2
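The hallucinated-import failure mode described above looks something like this. The "suggested" module name (`strutils`) is invented here purely for illustration; the fixed version uses only the standard library:

```python
# What a Copilot-style suggestion can look like: plausible-looking code
# that calls a module that was never imported -- and may not exist at all.
#
# Suggested completion (fails at runtime with a NameError):
#   result = strutils.normalize_whitespace(raw_text)

# A working equivalent using only the standard library:
import re

def normalize_whitespace(raw_text: str) -> str:
    """Collapse runs of whitespace to single spaces and trim the ends."""
    return re.sub(r"\s+", " ", raw_text).strip()

print(normalize_whitespace("  hello   world \n"))  # -> "hello world"
```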

Left-Shopping-9839 t1_j32drm7 wrote

Agree 100%. In my company (and I think most others are this way) your code has to pass tests. This is what is missing from the copilot model. They would need to track feedback all the way to production, and the fixes applied, to know whether a code suggestion was actually good. That's the sort of learning loop that needs to be in place to even start to claim intelligence. Hopefully they are working on this. I use copilot and honestly I love it. It's not going to be replacing humans in its current iteration, yet the hype train keeps rolling. LLMs are mockingbirds. They are impressively good, but still mockingbirds. DALL-E, imo... is shit.

1
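The feedback loop described above could be sketched roughly like this. Everything here is hypothetical (no such Copilot API exists): the idea is just to record whether an accepted suggestion later passed tests, so outcomes could in principle feed back into the model:

```python
# Hypothetical sketch of an outcome-tracking loop for code suggestions:
# log whether each accepted suggestion later passed CI, then report the
# observed success rate. All names are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SuggestionRecord:
    suggestion_id: str
    accepted: bool
    passed_tests: Optional[bool] = None  # unknown until CI runs

@dataclass
class FeedbackLog:
    records: List[SuggestionRecord] = field(default_factory=list)

    def log_acceptance(self, suggestion_id: str, accepted: bool) -> None:
        self.records.append(SuggestionRecord(suggestion_id, accepted))

    def log_test_result(self, suggestion_id: str, passed: bool) -> None:
        # Attach the CI outcome to the matching suggestion.
        for r in self.records:
            if r.suggestion_id == suggestion_id:
                r.passed_tests = passed

    def success_rate(self) -> float:
        # Fraction of accepted suggestions with a known, passing outcome.
        done = [r for r in self.records
                if r.accepted and r.passed_tests is not None]
        return sum(r.passed_tests for r in done) / len(done) if done else 0.0
```

The hard part in practice isn't this bookkeeping; it's attributing a production failure (or fix) back to a specific suggestion, which is exactly the gap the comment points out.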

Left-Shopping-9839 t1_j3243r8 wrote

If you actually do real software development, you would know this isn't possible. By 'you' I mean anyone, not specifically you. I have spent hours tracking strange errors back to the fact that I didn't check the copilot code closely enough. It does a great job providing code that is 90% correct, but it often slips in undeclared variables, etc. This is not 'intelligence'. It's just an awesome code completion tool that makes a lot of mistakes but still saves a lot of typing.

2
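The undeclared-variable slip mentioned above typically looks like this (a made-up minimal example, not real Copilot output):

```python
# 90%-correct completion: the loop logic is fine, but "total" is never
# initialized, so the suggested version raises UnboundLocalError.
#
# Suggested (buggy):
#   def average(values):
#       for v in values:
#           total += v        # 'total' referenced before assignment
#       return total / len(values)

# Fixed by declaring the accumulator:
def average(values):
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

print(average([1, 2, 3]))  # -> 2.0
```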