Glitched-Lies
Glitched-Lies t1_j9u2xyu wrote
The scientists who have contributed to the problem you mention (the problem of incomputability, and of simulation versus authentic consciousness), like Roger Penrose with his not entirely convincing science of Orchestrated Objective Reduction, have had it pointed out to them that it's a fallacy to declare at what point something becomes incomputable versus not. However, considering how counterintuitive quantum mechanics is under empirical experiment, you might be able to reconcile this fallacy. And if anything about science means anything, then this is the first approach, and the nearest neighbor to what would be objective truth on the matter.
What someone needs is a bridge between this problem and epistemology, not the ontologies of simulation versus authentic, and that means deep work on what an approach to a science of consciousness looks like. And how could anyone come to a conclusion on this? I don't think it will happen within the next 50 years. However, I think, as most scientists do, that there is a definitive answer which can be ruled "certain".
Glitched-Lies t1_j4yjxc9 wrote
Reply to comment by Nervous-Newt848 in Why Falling in Love with AI is a Dangerous Illusion — The Limitations and Harms of Artificial… by SupPandaHugger
Then you're in for an even bigger shock when you find that these "companies" are not working on conscious AI. In fact, nearly none of them are. And AI doesn't just "become" conscious.
The only conscious AI that could exist would be a spiking neural network doing brain emulation and cognition. None of these idiots do that, and some don't even know what that is.
Glitched-Lies t1_j4y7439 wrote
Reply to comment by Nervous-Newt848 in Why Falling in Love with AI is a Dangerous Illusion — The Limitations and Harms of Artificial… by SupPandaHugger
No, it's likely never, except on very small scales. To say otherwise would be a false promise, or a flat-out lie.
Glitched-Lies t1_j4vas9n wrote
Reply to Why Falling in Love with AI is a Dangerous Illusion — The Limitations and Harms of Artificial… by SupPandaHugger
Because it's not conscious.
Glitched-Lies t1_j3r89e2 wrote
Reply to [D] Special-purpose "neuromorphic" chips for AI - current state of the art? by currentscurrents
I just bought one from Brainchip. They seem pretty good. I asked them about their use cases; they have some videos on their YouTube channel showing classification tasks on images of beer bottles, but those seem to be the same kinds of tasks you can do on a regular GPU.
The Brainchip PCIe chip is interesting because you can code for it the regular way, then send the built neural network to the chip and convert it from a CNN into an SNN, but there doesn't seem to be a great reason to use it that way. The main use case seems to be running a native SNN on it. NPUs don't seem to scale the way GPUs do, though, either.
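For context on what the CNN-to-SNN conversion is doing: the rough idea is that a trained CNN's activation values get mapped onto the firing rates of spiking neurons. Here is a toy leaky integrate-and-fire neuron in plain Python as an illustration of that rate-coding idea; this is just a sketch of the spiking model, not Brainchip's actual MetaTF API.

```python
def lif_spikes(input_current, threshold=1.0, leak=0.9, steps=100):
    """Toy leaky integrate-and-fire (LIF) neuron.

    Integrates a constant input current, leaks membrane potential
    each step, and emits a spike (resetting the potential) whenever
    the threshold is crossed. Returns the total spike count.
    """
    v = 0.0       # membrane potential
    spikes = 0
    for _ in range(steps):
        v = leak * v + input_current  # leaky integration
        if v >= threshold:            # threshold crossing -> spike
            spikes += 1
            v = 0.0                   # reset after spiking
    return spikes

# A stronger input drives a higher spike rate; a weak input may never
# cross threshold at all. This input-to-rate mapping is the basic
# intuition behind converting CNN activations into SNN firing rates.
low = lif_spikes(0.05)
high = lif_spikes(0.5)
```

With these parameters the weak input settles below threshold and never fires, while the strong input spikes periodically, which is why a converted network can approximate the original CNN's activations with spike rates.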
Glitched-Lies t1_j2artba wrote
It is very much overhyped. That much is true. Nobody expected it to be so incredibly popular. I guess it's just because it can write code, and lots of people made it popular for no reason.
Glitched-Lies t1_ivhcnae wrote
Reply to comment by Ratheka_Stormbjorne in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
That's what it means; otherwise it becomes semantics.
Glitched-Lies t1_ivhae93 wrote
Reply to comment by Ratheka_Stormbjorne in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
That's what simulation means.
Glitched-Lies t1_ivgvna4 wrote
Reply to comment by Ratheka_Stormbjorne in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Digital systems can only simulate.
Glitched-Lies t1_ivgclwe wrote
Reply to comment by MarkArrows in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Also, it's not actually a fallacy at all to ignore arguments.
Glitched-Lies t1_ivgce1c wrote
Reply to comment by MarkArrows in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
I'm not debating it or starting an argument. And I'm not arguing over cells, which don't work as a comparison, because they are not one conscious human being.
Glitched-Lies t1_ivg9geg wrote
Reply to comment by MarkArrows in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
It actually follows from first-order logic about the phenomenal. A straight line of reasoning determines it, drawing on evidence of both empirical differences and non-empirical points. It's like 1+1=2, 1+1+1=3, 1+1+1+1=4, and so on in a series. It shouldn't be confused with mere belief reasoning, because it isn't really belief. Exploring the notion of this being wrong is a waste of time, for the reason above.
Glitched-Lies t1_ivdyvz2 wrote
Reply to comment by [deleted] in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
That doesn't matter, because humans simply are conscious as a matter of fact, so it doesn't need "proving".
Glitched-Lies t1_ivdyi2e wrote
Reply to comment by stucjei in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Those behaviors or outputs are subjective.
Glitched-Lies t1_ivdyefh wrote
Reply to comment by ReasonablyBadass in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Evidence that it is not, and not just in empirical terms. I mean that the differences I am talking about are missing at the core of these computers.
Glitched-Lies t1_ivdi4ub wrote
Reply to comment by Ratheka_Stormbjorne in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
It would be "settling" ethics at an incomplete place, by the very nature of what it would mean for a computer to simulate a consciousness, and of the relative wording about computations or the math. By their very nature, the differences are exactly that. An identical system wouldn't be a computer. It should be obvious from cause and effect that, scientifically, it begins from this fundamental difference.
Glitched-Lies t1_ivdenel wrote
Reply to comment by Ratheka_Stormbjorne in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Because a simulation cannot be conscious; otherwise it becomes semantics.
Glitched-Lies t1_ivdda4b wrote
Reply to comment by Ratheka_Stormbjorne in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
No, but at this point there is still knowledge of a difference, which could be described at many points of difference in cause and effect, and that is the important thing. It is just scientifically knowing a difference in how the "AI" and the "digital" operate, as opposed to what brains do.
Glitched-Lies t1_ivdc3j8 wrote
Reply to comment by Ratheka_Stormbjorne in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Well, it wouldn't be a model, and generally speaking that's why. And basically, "it's different" is observed in the fact that it just isn't firing like neurons do, and there is more besides.
Glitched-Lies t1_ivd9ofc wrote
Reply to comment by Ratheka_Stormbjorne in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
The evidence is observed in the fact that they are different to begin with. Computers can't be conscious; a machine that was conscious would be something different from a digital computer. That's what I meant. That's why I don't think this effort by Bostrom serves a good purpose: it's settling ethics on something incomplete.
Glitched-Lies t1_ivd6tmm wrote
Reply to comment by MarkArrows in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Not much is lost. But the importance of consciousness and life being unique and precious may be lost a bit, if it's taken literally, as opposed to being about human mannerisms.
I'm not wrong in my assumptions. That isn't an assumption anyway.
Glitched-Lies t1_ivd5jt4 wrote
Reply to comment by MarkArrows in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Computers and brains, physically and phenomenally speaking, are simply different. Their physical relationships to consciousness are not the same. In the literal sense they are different mechanics and different physical systems. Why would anyone settle for word relationships, like how a chatbot talks, or for behaviorisms?
Glitched-Lies t1_ivcse37 wrote
Reply to Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Why don't you, you know, build a conscious being that is actually conscious and isn't a robot, instead of worrying about something that can't be conscious anyway?
Glitched-Lies t1_j9u4hou wrote
Reply to comment by Glitched-Lies in Fading qualia thought experiment and what it implies by [deleted]
Obviously, people like John Searle (who claims to hold to naturalism) say the key point is syntax versus semantics, but this also endures a bit of a fallacy, in that it doesn't directly say what it means. Basically a paradox.