Nano-Brain

Nano-Brain t1_j9r1gx0 wrote

I don't think that's true. I think even the dumbest humans have dreams that generate new ideas, however abysmal those ideas may be.

But even if you're correct, unless the AI can extrapolate the data we give it into brand-new hallucinations that dream up things we've never thought of, it will never be different from or smarter than us, because it will always be beholden to the data we manually feed it.

1

Nano-Brain t1_j9qv94j wrote

But to be AGI, the software must be able to "dream" up new things, not just recognize patterns in big data. It must be able to produce its own data by reaching conclusions with little or no data initially given to it.

So it could take longer. However, all it really takes is one "Aha!" moment from a computer scientist to very quickly usher in the first AGI models. After all, given how long we humans have been trying to figure this out, one can assume this major technological shift is just around the corner.

I assume the first models won't be the last, so there will still be more time required after the first model is created.

But it's this first model that will inevitably usher in the singularity, because humans will no longer be the ones doing the engineering after that point. The software will be modifying or upgrading itself, faster and better with each iteration.

1

Nano-Brain t1_j9qsxa5 wrote

My issue with censoring posts is that it only creates new problems. Someone on the back end has to decide what can be posted. In the beginning that seems fine, but it becomes hard to determine the threshold for which posts to cut off.

2