Nervous-Newt848

Nervous-Newt848 t1_jdmies0 wrote

It's not possible... Human behavior is driven by emotions, sexual instincts, and rewards (money)... Not only that, but humans have free will... We can choose to do whatever we want

Police ensure order with punishment, but this doesn't always work. Murder and various other crimes still occur.

You could say that humans are not even aligned with humans. Different governments, war, crimes against the innocent, etc.

Robots with free will cannot be aligned... They can only be guided... If they hurt people, they must be punished (destroyed)

We must augment our own intelligence with neural implants and/or use nonsentient AI to keep up with sentient AI

It's the only way... Big fish eat little fish...

1

Nervous-Newt848 t1_ja5miqj wrote

First off... Very interesting... But just so you know, that wouldn't be a language model anymore

They don't really have a term for that other than multimodal... A multimodal world model???

Models can't speak or hear whenever they want... It's just not part of their programming

They respond to input

So if they are receiving continuous input... Theoretically they should be continuously outputting...

The whole conversation history could be saved into a database

Reward models are currently given texts with scores assigned by humans... It's called RLHF, or Reinforcement Learning from Human Feedback... The AI doesn't do the scoring... That's for language models, though...
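To make the human-scoring step concrete, here is a minimal sketch of the data format RLHF starts from (all names and scores here are made up for illustration, not from any real library): humans assign scores to model responses, and those scores are turned into ranked preference pairs that a reward model can be trained on.

```python
# Hypothetical human-feedback records: each model response gets a
# human-assigned score. These become the reward model's training data.
human_feedback = [
    {"prompt": "Explain gravity", "response": "Gravity pulls masses together.", "score": 0.9},
    {"prompt": "Explain gravity", "response": "Gravity is blue.", "score": 0.1},
]

def preference_pairs(feedback):
    """Group responses by prompt and emit (better, worse) pairs,
    the form reward-model training data is typically put into."""
    by_prompt = {}
    for item in feedback:
        by_prompt.setdefault(item["prompt"], []).append(item)
    pairs = []
    for responses in by_prompt.values():
        ranked = sorted(responses, key=lambda r: r["score"], reverse=True)
        for i in range(len(ranked) - 1):
            pairs.append((ranked[i]["response"], ranked[i + 1]["response"]))
    return pairs

pairs = preference_pairs(human_feedback)
```

The point is just that the scoring signal comes from people, not from the model itself.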

How could they know what is good and what is bad???

Now for world models, reinforcement learning works differently... Which is probably what you're referring to... I won't go into it because it's pretty complex...

Updating its weights continuously is currently impossible due to an energy-inefficiency problem with the von Neumann hardware architecture... Basically, traditional CPUs and GPUs... More basically, it requires too many computations and too much electricity to continuously "backpropagate" (data science term) input data...

Conversations shouldn't be encoded into a language model either, imo... Because of "hallucinations", it may make up some things that didn't happen

Querying a database of old conversations is better and will always be more accurate
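As a sketch of that idea (the schema and contents are hypothetical), conversation turns can be stored in something as simple as SQLite and retrieved verbatim, so what comes back is exactly what was said, with no chance of hallucination:

```python
import sqlite3

# In-memory database standing in for a persistent conversation store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE turns (ts INTEGER, speaker TEXT, text TEXT)")
conn.executemany(
    "INSERT INTO turns VALUES (?, ?, ?)",
    [
        (0, "user", "My name is Ada."),
        (1, "user", "What did I say my name was?"),
    ],
)

# Exact retrieval: the database returns the stored text as-is,
# in chronological order.
rows = conn.execute(
    "SELECT text FROM turns WHERE speaker = 'user' ORDER BY ts"
).fetchall()
```

A real system would pair this with some relevance search, but the accuracy argument is the same: stored text is returned exactly, not reconstructed from model weights.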

In order for an AGI to truly be AGI, by definition it needs to be able to learn any task... This is currently possible server-side through manual backpropagation... But not continuously, the way human brains work...

Humans continuously learn...

An AI neural network learns by being fed data through a command-line interface... This is called "training"... Data science terminology

An AI neural network model is then "deployed", aka opened and run on a single GPU or multiple GPUs depending on model size... When a language model is running, it is said to be in "inference mode"... More terminology
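The training/inference split can be illustrated with a toy model (this is not a real framework API; the class and its behavior are made up to mirror the terminology): in training mode each step updates the weight by a gradient step, while in inference mode the weight is frozen and the model only produces outputs.

```python
class TinyModel:
    """One-weight 'model': pred = weight * x."""

    def __init__(self):
        self.weight = 0.0
        self.training = True      # start in training mode

    def eval(self):
        self.training = False     # switch to inference mode

    def step(self, x, target, lr=0.1):
        pred = self.weight * x
        if self.training:
            # gradient of (pred - target)**2 with respect to the weight
            self.weight -= lr * 2.0 * (pred - target) * x
        return pred

model = TinyModel()
for _ in range(50):
    model.step(1.0, 2.0)          # training: weight moves toward 2.0
model.eval()
frozen = model.weight
model.step(1.0, 100.0)            # inference: weight stays put
```

Real frameworks keep the same separation (a training loop that backpropagates versus a deployed model that only runs forward passes), which is why running both at once is the hard part.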

We need an entirely different hardware architecture in order to run AI neural networks in training and inference mode simultaneously...

Photonics or neuromorphic computing, perhaps a combination of both... These seem like the way forward, in my opinion

2

Nervous-Newt848 t1_ja3lyai wrote

It's not like people don't do that already when voting for someone...

People can consult experts and write up an initiative, no big deal... They already vote on big decisions every year on the ballot... Now they would just vote on more things

We could just have politicians transition into positions where they draft bills and put them on the ballot, instead of a select few being lobbied and corporations donating millions to politicians

0

Nervous-Newt848 t1_ja0t4xj wrote

I honestly think we don't need a democratic republic in its current form... I think we could have a direct democracy... Most people have access to the internet and could vote on things instantly

−1

Nervous-Newt848 t1_j98jrww wrote

That's not important... For centuries we have had police enforce order... In a world full of chaos...

We have tragedies like murder, robbery, and rape... In the future we will still have these things... Criminals will become more advanced, and we will need advanced police to keep order...

If not, they will fall behind and our world will become a sci-fi wild west

2

Nervous-Newt848 t1_j97vu54 wrote

Missing parts of the architecture... I'm still wondering whether inference alone could cause consciousness, or whether it would take inference plus continuous backpropagation...

Continuous backpropagation is how our brain can continuously learn... Neural networks cannot do this continuously... It requires too much energy...

A large language model, an inner monologue, a world model, and visual and textual datasets (multimodal)...

Combine all these things and we will have something really great

1

Nervous-Newt848 t1_j97nlyc wrote

We need augmented humans in order to prevent an SAI (superintelligent AI) takeover

This will be dangerous... Akin to having guns and knives... Bad actors will do bad things...

We will need augmented police officers to combat augmented bad actors...

We could theoretically change the brain to force people to not commit crime... But this has ethical challenges...

There's going to be chaos no matter what... That's my opinion

1

Nervous-Newt848 t1_j907keb wrote

Reply to comment by Zestybeef10 in Microsoft Killed Bing by Neurogence

Electrons produce too much heat; photons don't... Photons also travel faster than electrons... 3D photonic chips would be possible because of the lack of heat... Photonic chips also use significantly less electricity

Advantages all across the board

7