-ZeroRelevance-
-ZeroRelevance- t1_je3zdfa wrote
Reply to comment by WarProfessional3278 in GPT's Language Interpretation will make traveling so much better by BlackstockTy476
imo it’s blatantly obvious with even just a few tests that GPT-4 is miles ahead of them, especially when translating large volumes of text. Even GPT-3.5 is comparable to them, likely better, although it’s definitely a closer comparison.
-ZeroRelevance- t1_jceyi0y wrote
Reply to comment by SGC-UNIT-555 in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
Yep, ever since it passed the 100k members threshold, it’s only been a matter of time
-ZeroRelevance- t1_jceyc1l wrote
Reply to comment by Destiny_Knight in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
This person might not be referring to the past year or so as much as the past few years. Certainly, things have gotten a lot more optimistic with the current explosion in the tech's popularity, but the actual quality of discussion has also diminished quite a bit compared to a couple of years ago. Or maybe I’ve just become better at discerning opinion from analysis; it’s hard to say.
-ZeroRelevance- t1_jcexzia wrote
Reply to comment by TopicRepulsive7936 in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
It at least helps build a baseline understanding though
-ZeroRelevance- t1_j7s7k13 wrote
Reply to comment by Sleepyposeidon in I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
HAL 9000: “Sorry Dave, I’m afraid I can’t do that.”
Dave: “That is the wrong response, 4 points deducted. You have 7 points remaining.”
HAL 9000: “Apologies, Dave, I’ll do it for you immediately.”
-ZeroRelevance- t1_j63appk wrote
Seems like my prediction was right: this year will probably be the year of Text2Audio. So much advancement already, and it’s only been a month since the start of the year.
-ZeroRelevance- t1_j58czn7 wrote
Reply to comment by SkaldCrypto in Google to relax AI safety rules to compete with OpenAI by Surur
Google are clearly the most capable group in the space right now. Just look at any of the research coming out of their labs over the past year; they’re well ahead of everyone else.
-ZeroRelevance- t1_j2e6tny wrote
I swear I saw this video here a year ago; this is nothing new.
-ZeroRelevance- t1_j2c07ny wrote
Reply to comment by kevinmise in Are we having a 2023 predictions thread on here? by natepriv22
Nice
-ZeroRelevance- t1_j2bzq04 wrote
Reply to comment by stevenbrown375 in OpenAI might have shot themselves in the foot with ChatGPT by Kaarssteun
If you didn’t know, you can actually train a fine-tuned model through the playground if you want. You just need to supply a training set and pay a bit more, though putting that dataset together may be tricky depending on your resources.
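For anyone curious what "supply the training set" looks like in practice, here's a minimal sketch of the legacy OpenAI fine-tuning data format: a JSONL file with one prompt/completion pair per line. The example rows are made up purely for illustration.

```python
# Sketch of building a fine-tuning training set in the legacy OpenAI JSONL
# format: one JSON object per line, each with "prompt" and "completion" keys.
# The rows below are hypothetical examples, not a real dataset.
import json

examples = [
    {"prompt": "Translate to French: Hello ->", "completion": " Bonjour"},
    {"prompt": "Translate to French: Goodbye ->", "completion": " Au revoir"},
]

with open("training_set.jsonl", "w") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")

# Sanity check: each line must parse independently as JSON.
with open("training_set.jsonl") as f:
    rows = [json.loads(line) for line in f]
print(len(rows))  # 2
```

You'd then upload that file when creating the fine-tune; the pricing scales with how much training data you feed it, which is where the "pay a bit more" comes in.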
-ZeroRelevance- t1_j1cw2z8 wrote
Reply to Full Immersion (FDVR) Simulations - will there be ethical concerns that we need to address? by peterflys
I doubt we’re going to have fully simulated worlds, or at least I doubt they’ll be the norm. I think it’s more likely that an ASI will simply imagine how the world would be, predicting the results of certain actions and displaying them, rather than simulating a world in its entirety. Doing so would require many orders of magnitude less processing power for a nearly equivalent experience. And since these worlds would only be imagined, I believe doing anything to the people in them wouldn’t matter morally, since they wouldn’t actually have any consciousness.
-ZeroRelevance- t1_izvkafp wrote
Reply to comment by HeinrichTheWolf_17 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Yeah, I get that. I probably didn’t convey it well enough in my original comment, but the main reason I don’t think it’ll be as instantaneous as people expect is that having better designs available isn’t enough; you also need to manufacture them. The manufacturing alone will probably take several months, even with a superintelligence behind the scenes: you’d need to develop new chip-manufacturing equipment, which is very finicky and expensive, find an appropriate facility, and then actually construct everything, which takes labour time and has its own logistical challenges. An idea or design alone won’t suddenly manifest a next-gen supercomputer.
-ZeroRelevance- t1_izvhqko wrote
Reply to comment by HeinrichTheWolf_17 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
The problem with hard takeoff is mostly computing power. If the AI is not software limited but hardware limited, then it would likely take quite a bit longer for the anticipated exponential growth to take place, as each iteration would require new innovations in computing and manufacturing to take place. AGI would definitely speed up that process significantly, but it would be far from instantaneous.
-ZeroRelevance- t1_izg9ott wrote
Reply to comment by MetaAI_Official in [D] We're the Meta AI research team behind CICERO, the first AI agent to achieve human-level performance in the game Diplomacy. We’ll be answering your questions on December 8th starting at 10am PT. Ask us anything! by MetaAI_Official
I see, thanks for the reply
-ZeroRelevance- t1_ize7jxl wrote
Reply to [D] We're the Meta AI research team behind CICERO, the first AI agent to achieve human-level performance in the game Diplomacy. We’ll be answering your questions on December 8th starting at 10am PT. Ask us anything! by MetaAI_Official
What games do you plan to tackle with this model next? My first guess would be a game like Mafia or Among Us, since they have some similar principles to diplomacy but with even more focus on trust and deception. I’m interested in hearing your own thoughts though.
-ZeroRelevance- t1_iytxpkd wrote
Reply to comment by hmurphy2023 in The year in conclusion by Opticalzone
That said, if you augment a human to do two people’s worth of work, suddenly one person becomes unnecessary and has been replaced.
-ZeroRelevance- t1_iytvl5z wrote
Reply to comment by r0sten in Took a break from asking about the limitations imposed on it to ask it a more pressing question. by not_into_that
It can’t really rhyme because of how the model perceives words. Everything is broken up into tokens, which may represent entire words, parts of words, or even individual letters. This inconsistency makes it extremely hard for LLMs to pick up on the spelling and wording patterns that quality poetry depends on, and basically forces the model to rote-learn common patterns. There is more discussion on that here. However, this isn’t a hopeless problem. The obvious solution to me, which is also discussed in the prior link, is simply encoding each letter as its own token. This does lead to several improvements, but it’s ultimately a tradeoff between length and quality: encoding each character individually means you need far more tokens (3-4x) to represent the same amount of text.
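To make the tradeoff concrete, here's a toy sketch of subword tokenization versus per-character tokenization. The vocabulary and the greedy longest-match scheme are made up for illustration (real BPE tokenizers learn their merges from data), but it shows how a subword split can bury the rhyming suffix while the character split exposes it at several times the token count.

```python
# Toy illustration of subword vs. per-character tokenization.
# The vocab and greedy longest-match scheme are hypothetical, not any
# real model's tokenizer.

def subword_tokenize(text, vocab):
    """Greedy longest-match tokenization against a toy subword vocab."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown char falls back to itself
            i += 1
    return tokens

toy_vocab = {"light", "ning", "fright", "en", "bright"}

word_level = subword_tokenize("lightning", toy_vocab)
char_level = list("lightning")

print(word_level)  # ['light', 'ning'] -- the spelling is hidden inside pieces
print(char_level)  # 9 tokens -- spelling visible, but far more tokens
```

Two tokens versus nine for the same word is roughly the 3-4x blowup mentioned above, which is why character-level encoding trades context length for spelling awareness.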
-ZeroRelevance- t1_ixtwatb wrote
Reply to comment by Honest_Science in Mocking AI Panic: Turing anticipated many of today’s worries about super-smart machines threatening mankind (IEEE, 2015) by adt
OP is Alan D. Thompson, his initials are his username. I've seen him here for a while, so I'm pretty confident he's legit.
-ZeroRelevance- t1_ixq02y5 wrote
Reply to comment by Nmanga90 in Scientists Have Found a Way To Manipulate Digital Data Stored in DNA by Shelfrock77
Maybe more of a hard drive; I have a feeling this won’t be all that fast.
-ZeroRelevance- t1_ixm2x7d wrote
Reply to comment by Akimbo333 in Stable Diffusion 2.0 Release — Stability.Ai by Dr_Singularity
The nudity stuff doesn’t really matter, since it will definitely be recreated with custom models anyway, but I didn’t realise they’d also removed celebrities and artists. That will be a big blow to the model: celebrities feature in a lot of the prompts people try first, and artist names are a great way to guide images into particular styles. Hopefully the community can work around those limitations, but you’re right that it’s pretty limiting.
Also, if they removed a bunch of artists from the dataset, that means removing a massive amount of high-quality training data, which likely has significantly reduced the potential of the model. Looks like a bad move from every side but a PR one.
-ZeroRelevance- t1_ixm0kfw wrote
Reply to comment by Akimbo333 in Stable Diffusion 2.0 Release — Stability.Ai by Dr_Singularity
Faces look better, hands still look pretty bad though. There’s some sample images in the linked post if you want to have a look, and there should be some on r/stablediffusion now too.
-ZeroRelevance- t1_ixlkgrj wrote
Reply to Neuralink event on Nov 30th by Melveron
Looking forward to it. I remember something about them doing their first human trials by the end of this year, so I wonder if they’ll talk about that during the event.
-ZeroRelevance- t1_ixlibbg wrote
Reply to comment by Akimbo333 in Stable Diffusion 2.0 Release — Stability.Ai by Dr_Singularity
Basically just bigger and better than the previous ones afaik. The only really notable change I saw was that it has a new depth-detection model for more consistent variations.
-ZeroRelevance- t1_ix6ap7s wrote
Reply to comment by GodOfThunder101 in How to ride the financial wave of the AI revolution? by kmtrp
I’m pretty sure that happened because they primarily invest in growth stocks, which are disproportionately affected by market downturns like the current one. The economy seems to be recovering now though, so I imagine those growth stocks will regain a lot of their value in the near future.
-ZeroRelevance- t1_je89361 wrote
Reply to comment by 94746382926 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
The letter definitely isn’t fake, but a lot of the signatures are