Silicon-Dreamer t1_j9ov5qx wrote
Despite arguing yes, AI art is art, it seems like a good idea to restrict its ability to be copyrighted. If it can be, what is to stop an individual from renting a ton of GPUs, generating billions of images from a generic prompt like "beautiful painting/character, 8k, etc.", copyrighting them all, then threatening to sue anyone who generates anything that looks similar enough (in the eyes of a judge who doesn't know the technology well) to warrant a case?
Silicon-Dreamer t1_j7wt93d wrote
Reply to comment by GayHitIer in The copium goes both ways by IndependenceRound453
(I'm one of the people who believes the singularity is coming within my lifetime, before 2060 as a majority of experts believe, and that it will have a large positive impact)
That said, the post doesn't feel directed against me as a person, just directed against my position.
> ad ho·mi·nem
> (of an argument or reaction) directed against a person rather than the position they are maintaining.
Isn't there value in thinking about why we maintain our positive outlooks on AI development? (My reasoning, to be fair, stems largely from being healthy enough that I can reasonably estimate I'll live to see 2060, even without modifications.)
Silicon-Dreamer t1_j6otqv6 wrote
Reply to comment by Ezekiel_W in Is AI censorship an obstacle to its usefulness? by EVJoe
I would disagree. Sam Altman in his StrictlyVC interview said,
> "One of the things we really believe is that the most responsible way to put this out in society is very gradually, and to get people, institutions, policymakers, get them familiar with it, thinking about the implications" ...
OpenAI has vast computing resources, as we know, so until algorithmic advances allow open-source, lower-compute groups to build (& run inference on) alternatives, their containment efforts accomplish Sam's goal very effectively -- making the release process more gradual for institutions' and policymakers' sake.
We all know how slowly government operates at times, especially in democracies that require consensus. It stands to reason, then, that if OpenAI's policy changed to fully releasing any new work ASAP, and if we assume there's ever anything negative the new AI can do, government would not react before it had already had a long impact. I won't argue my political views in this post, but it is worth noting that the negative thing could be as benign as a few more spam emails... or annihilation of the planet, and everything in between.
I really like this planet.
Silicon-Dreamer t1_j28wrxv wrote
Reply to Revolutionary machine learning weather simulator by DeepMind & Google’s ML-Based "GraphCast" outperforms top global forecasting system. GraphCast can generate accurate 10-day forecasts at a resolution of 25 km in under 60 seconds. by vegita1022
Original link, for those interested.
Love to see it & will expect more resilient economic processes from this.
Silicon-Dreamer t1_j0gdauu wrote
Reply to comment by GeneralZain in Can We Please Stop Being Disparaging Towards Artists And Others Affected By AI? by apinkphoenix
> if somebody acts dumb, I'm gonna call em dumb...
And on this subject, how exactly does that make the situation better, rather than encouraging in-group/out-group bias & polarization?
Silicon-Dreamer t1_ixtho5f wrote
I find it infinitely challenging to know how to talk about this subject online. Anyone else here feel the same way?
I wish them the best developing the bill, nuances and all. To be blunt, the university analyses I've tried to read on the societal effects of pornographic deepfakes largely do not reach positive conclusions. Given the track record of these scientific institutions (depending on the university, of course, but generally highly regarded, like Rutgers & one analysis of theirs), that is one reason why I wrote my flair... How often have we come across perspectives online in such tech circles that push solely for adaptation, not regulation? I don't mean to insult those who hold that perspective, but given that most university analyses on the issue call for some regulation, it seems only right. Sure, I'd love a world where erotic/pornographic content can exist more freely, especially since I'm an ultra filthy cybernetically-modify-my-body-to-have-ten-penises-and-ten-vaginas-before-entering-the-next-stage-of-the-singularity pervert. It's just... how do we build such opportunity for all, rather than only empowering the already powerful, as the Rutgers analysis discusses?
Silicon-Dreamer t1_ix21py8 wrote
Reply to comment by No-Shopping-3980 in Do you think religion will survive the 2030s? by [deleted]
I generally agree, though could you/anyone please explain the freeing-from-morality aspect? Even here I agree a lot of examples within the realm of morality will likely fall by the wayside, though I would assume there are other norms we would keep no matter how advanced we become. Not blowing up planets?
Silicon-Dreamer t1_itd29gw wrote
Reply to comment by HeinrichTheWolf_17 in Could AGI stop climate change? by Weeb_Geek_7779
I tend to agree, though in case anyone reading this is outside the US & doesn't follow US events, there's a rather stark division on who supports EVs versus who does not. (After typing "electric vehicles site:congress.gov" into a search engine, there's a noticeable trend... I'd recommend not trusting my word alone; see for yourself.)
Silicon-Dreamer t1_isqwy6i wrote
Reply to comment by imlaggingsobad in A new AI model can accurately predict human response to novel drug compounds by Dr_Singularity
Certainly a step in the right direction I'd argue! I don't know what Isomorphic Labs' mission is, but to quote DeepMind's mission statement,
> Our long term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI).
Submitted by Silicon-Dreamer t3_xwduzb in singularity
Silicon-Dreamer t1_iqpie9q wrote
Reply to comment by HyperImmune in Dramatron: Co-Writing Screenplays and Theatre Scripts with Language Models (DeepMind) by nick7566
If I had to guess (not reliable here), it's that DeepMind is in a relatively different financial-incentive situation compared to various other companies. Google AI, Tesla AI, Microsoft, etc. have products to sell and stocks to consider. DeepMind kind of sort of has that too, with their WaveNet system being integrated into text-to-speech products, but it's very much a side concern. I'd say they're much more focused on just researching.
Silicon-Dreamer t1_j9oz230 wrote
Reply to comment by duboispourlhiver in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
My apologies, good question. I'm looking at the copyright registration fees for those working in the US, and apparently there are separate fee categories for single works & groups of works. I'm not sure how "group" is defined here, how many works constitute a group, or whether there's a limit on it.