Glitched-Lies

Glitched-Lies t1_j9u2xyu wrote

The scientists who have contributed to the problem you mention, the problem of incomputability and simulation versus authentic consciousness, like Roger Penrose with his not-entirely-convincing theory of Orchestrated Objective Reduction, have accordingly had it pointed out that it's a fallacy to say at what point something is incomputable versus not. However, given how counterintuitive quantum mechanics is under empirical experiment, you might be able to reconcile this fallacy. And if anything about science means anything, then this is the first approach and nearest neighbor to what would be objective truth on the matter.

What someone needs is a bridge between this problem and epistemology, not the ontologies of simulation versus authentic, and that means deep work on how to approach a science of consciousness. And how could anyone come to a conclusion on this? I don't think it will happen within the next 50 years. However, like most scientists, I think there is a definitive answer which can be ruled "certain".

1

Glitched-Lies t1_j4yjxc9 wrote

Then you must be in for an even bigger shock to find that none of these "companies" are working on conscious AI. In fact, nearly none of them are. And AI doesn't just "become" conscious.

The only conscious AI that could exist would be a spiking neural network doing brain emulation and cognition. None of these idiots do that. And some don't even know what that is.
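For context, a spiking neural network models neurons that fire discrete spikes over time instead of passing continuous activations. Here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the standard textbook SNN building block; the parameter values are illustrative, not taken from any particular chip or brain model:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates input current, and emits a spike
# (then resets) when it crosses a threshold.

def simulate_lif(current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 tau=10.0, dt=1.0):
    """Return a list of spike flags (0/1), one per timestep of input current."""
    v = v_rest
    spikes = []
    for i_in in current:
        # Euler step of dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset  # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant suprathreshold current produces regular spiking;
# a subthreshold current produces none.
spike_train = simulate_lif([1.5] * 50)
print(sum(spike_train))  # total spikes over 50 steps
```

This is the single-neuron dynamic; brain emulation would wire vast numbers of these together with learned synaptic weights, which is a very different engineering problem from training a standard deep network.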

2

Glitched-Lies t1_j3r89e2 wrote

I just bought one from Brainchip. They seem pretty good. I asked them about some of their use cases; they have videos on their YouTube showing image-classification tasks on beer bottles, but those seem to be the same kinds of tasks you can do on a regular GPU.

Brainchip's PCIe chip is interesting because you can code for it as usual, then send the built neural network to the chip and convert it from a CNN into an SNN, but there doesn't seem to be a great reason to use it this way. The main use case seems to be running a native SNN on it. NPUs don't seem to scale the way GPUs do, though, either.
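The CNN-to-SNN conversion idea can be sketched generically: a trained layer's ReLU activations are approximated by integrate-and-fire neurons whose firing *rate* matches the analog output. This is a toy rate-coding illustration of the principle, not Brainchip's actual toolchain or API; all names and values here are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def if_layer_rate(x, w, steps=1000):
    """Approximate relu(w @ x) by rate coding: integrate-and-fire
    neurons driven by a constant input current, counting spikes."""
    drive = w @ x                      # constant input current per neuron
    v = np.zeros_like(drive)
    spike_counts = np.zeros_like(drive)
    for _ in range(steps):
        v += drive                     # integrate
        fired = v >= 1.0               # fixed threshold of 1.0
        spike_counts += fired
        v[fired] -= 1.0                # soft reset: subtract threshold
    # Firing rate approximates relu(drive) for drives in [0, 1].
    return spike_counts / steps

x = rng.uniform(0.0, 1.0, size=4)
w = rng.uniform(-0.25, 0.25, size=(3, 4))   # keeps drives within [0, 1]
analog = relu(w @ x)
spiking = if_layer_rate(x, w)
print(np.max(np.abs(analog - spiking)))     # small approximation error
```

Negative drives never fire (matching ReLU's zero output), and positive drives fire at a rate close to their value, so the spiking layer reproduces the analog layer up to a small quantization error that shrinks with more timesteps. That trade-off (accuracy versus simulation steps) is one reason converted SNNs are often less attractive than natively trained ones.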

2

Glitched-Lies t1_ivg9geg wrote

It actually follows from first-order logic about the phenomenal. A straight line of reasoning determines it, built on gathered evidence from both empirical differences and non-empirical points. It's like 1+1=2, 1+1+1=3, 1+1+1+1=4, and so on in a series. Confusion about it comes from belief-based reasoning, and that isn't truly belief. Given the explanation above, exploring the notion that this is wrong is a waste of time.

1

Glitched-Lies t1_ivdi4ub wrote

It would be "settling" ethics at an incomplete place, by the very nature of what it means for a computer to simulate a consciousness, and of the relative wording about computations or the math. By their very nature, the differences are just that. An identical system wouldn't be a computer. It should be obvious from cause and effect that, scientifically, it begins from this fundamental difference.

0

Glitched-Lies t1_ivdda4b wrote

No, but at this point there is still knowledge of a difference, which could be described at many points of difference in cause and effect, and that's the important thing. It's just scientifically knowing a difference between how the "AI" operates, being "digital", as opposed to what brains do.

1

Glitched-Lies t1_ivd9ofc wrote

The evidence is observed in the fact that they are different to begin with. Computers can't be conscious; a machine that is conscious would be something different from a digital computer. That's what I meant. That's why I don't think this piece by Bostrom serves a good purpose. It's settling ethics on something incomplete.

−5

Glitched-Lies t1_ivd6tmm wrote

Not much is lost. But the importance of consciousness, and of life being unique and precious, may be lost a bit if it's taken literally, as opposed to being about human mannerisms.

I'm not wrong in my assumptions. That's not an assumption anyway.

−8

Glitched-Lies t1_ivd5jt4 wrote

Computers and brains, physically and phenomenally speaking, are simply different. Their physical relationships to consciousness are not the same. In the literal sense they are different mechanics and different physical systems. Why would anyone just settle for word relationships, like how a chatbot talks, or for behaviorisms?

−10