Silicon-Dreamer

Silicon-Dreamer t1_j9oz230 wrote

My apologies, good question. I'm looking at the copyright registration fees for those working in the US, and apparently there are separate fee categories for single works & groups of works. I'm not sure how "group" is defined here, how many works constitute one, or whether there's a limit on it.

5

Silicon-Dreamer t1_j9ov5qx wrote

Despite arguing that yes, AI art is art, it seems like a good idea to restrict its ability to be copyrighted. If it can be, what is to stop an individual from renting a ton of GPUs, generating billions of images from a generic prompt like "beautiful painting/character, 8k, etc etc", copyrighting them all, then threatening to sue anyone who generates anything that looks similar enough (in the eyes of a judge who doesn't know the technology well enough) to warrant a case?

43

Silicon-Dreamer t1_j7wt93d wrote

(I'm one of the people who believe the singularity is coming within my lifetime, before 2060 as a majority of experts believe, and that it will have a large positive impact.)

That said, the post doesn't feel directed against me as a person, just directed against my position.

> ad ho·mi·nem

> (of an argument or reaction) directed against a person rather than the position they are maintaining.

Isn't there value in thinking about why we maintain our positive outlooks on AI development? (My reasoning, to be fair, stems largely from being abnormally healthy enough to estimate that I could easily live to see 2060, even without modifications).

3

Silicon-Dreamer t1_j6otqv6 wrote

I would disagree. Sam Altman in his StrictlyVC interview said,

> "One of the things we really believe is that the most responsible way to put this out in society is very gradually, and to get people, institutions, policymakers, get them familiar with it, thinking about the implications" ...

OpenAI has vast computing resources as we know, so before algorithmic advances allow open source, lower-compute groups to build (& run inference with) alternatives, their containment efforts accomplish Sam's goal very effectively -- making the release process more gradual for institutions' and policymakers' sake.

We all know how slowly government operates at times, especially democracies that require consensus. It stands to reason, then, that if OpenAI's policy changed to completely releasing any new work ASAP, and if we assume there's ever any negative thing the new AI can do, government would not react before it's already had a long impact. I won't argue my political views in this post, but it is worth noting that the negative thing... could be as benign as a few more spam emails..... or annihilation of the planet, and everything in between.

I really like this planet.

9

Silicon-Dreamer t1_ixtho5f wrote

I find it infinitely challenging to know how to talk about this subject online. Anyone else here feel the same way?

I wish them the best developing the bill, nuances and all. From what I can gather of the university analyses on the societal effects of pornographic deepfakes, to be blunt, their conclusions are largely not positive. Given the track record of these scientific institutions (it depends on the university of course, but generally speaking they're highly regarded, like Rutgers & one analysis of theirs), it is one reason why I wrote my flair... How often have we come across perspectives online that push solely for adaptation, not regulation, in such tech circles? I don't mean to insult those who hold that perspective, but given that most university analyses on the issue call for some regulation, it seems only right. Sure, I'd love a world where erotic/pornographic content can exist more freely, especially since I'm an ultra filthy cybernetically-modify-my-body-to-have-ten-penises-and-ten-vaginas-before-entering-the-next-stage-of-the-singularity pervert. It's just... how do we build such opportunity for all, rather than only empowering the already powerful, like what the Rutgers analysis talks about?

17

Silicon-Dreamer t1_isqwy6i wrote

Certainly a step in the right direction, I'd argue! I don't know what Isomorphic Labs' mission is, but to quote DeepMind's mission statement,

> Our long term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI).

23

Silicon-Dreamer t1_iqpie9q wrote

If I had to guess (I'm not reliable here), it's that DeepMind is in a relatively different financial incentive situation compared to various other companies. Google AI, Tesla AI, Microsoft, etc. have products to sell and stocks to consider. DeepMind sort of has that too, with its WaveNet system being integrated into text-to-speech products, but that's very much a side concern. I'd say they're much more focused on just researching.

21