Top-Perspective2560 t1_j9ukzgc wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
As others have said, being concerned with AI ethics and safety, and taking those concerns seriously, is a good thing.
The problem is - and this is Just My Opinion™ - that people like EY are making what basically amount to spurious speculations about completely nebulous topics such as AGI, and they have very little to show in the way of evidence that they understand, in any technical detail, where AI/ML currently stands or what the current SOTA looks like. EY in particular seems to have jumped straight to those topics without any grounding in technical AI/ML research. I can't help but feel that, on some level at least, those topics were chosen partly because it's easy to grab headlines and media attention by making statements about them.
I'm not saying it's a bad thing to have people like EY around, that he or others like him are bad actors, or that they shouldn't continue doing what they're doing. They may well be correct, and their ideas aren't necessarily wrong. It's just very difficult to genuinely take what they say seriously or to make any practical decisions based on it, because so much of it is speculative. It reminds me a bit of Asimov's Laws of Robotics: they seemed to make a lot of sense decades ago, before anyone knew how the development of AI/ML would pan out, but in reality they boil down to "it would be great if things worked this way," with no realistic plan for implementing them, or even any way to know whether they would turn out to be relevant.
The other thing is, as other people have pointed out, there are immediate and real problems with AI/ML as it stands, and solving those problems or avoiding harm requires more than speculative statements. I think the apparent lack of will among the biggest names in AI/ML ethics and safety to address those issues is quite conspicuous.
Edit: Added a bit about Asimov's Laws of Robotics which occurred to me after I made the post.
Top-Perspective2560 t1_j9jbpwq wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
We know how it works. Someone designed it. What he’s talking about is a lack of interpretability around what goes on in the hidden layers and why the model produces specific outputs. It’s not magic.
Top-Perspective2560 t1_j92c6w4 wrote
Reply to [D] Please stop by [deleted]
I’m aware that this is due to a high workload for the person moderating the sub, but I’d suggest a simple moratorium on ChatGPT posts as a good starting point. I believe that could be automated fairly easily based on post titles - something along the lines of the sketch below.
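Just to illustrate the kind of thing I mean - a minimal sketch of a title-based filter, assuming it could be wired into whatever tooling the mods actually use (the keyword pattern and example titles are my own placeholders, not an actual AutoModerator rule):

import re

# Illustrative keyword pattern for the moratorium; placeholder only.
CHATGPT_PATTERN = re.compile(r"\bchat\s*-?\s*gpt\b|\bgpt-?[34]\b", re.IGNORECASE)

def should_hold(title: str) -> bool:
    """Return True if a post title falls under the ChatGPT moratorium."""
    return bool(CHATGPT_PATTERN.search(title))

if __name__ == "__main__":
    for title in [
        "[D] Will ChatGPT replace programmers?",
        "[R] A new optimiser for sparse transformers",
    ]:
        print(f"{title!r} -> {'hold' if should_hold(title) else 'keep'}")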
Top-Perspective2560 t1_j27fba8 wrote
Used one of the codes, thanks! Will leave a review
Top-Perspective2560 t1_j0ho0hv wrote
Reply to comment by tmblweeds in [P] Medical question-answering without hallucinating by tmblweeds
Sounds good! The table of treatments sounds like a good starting point - but further down the road the issue is making sure it actually corresponds to the model's "answer" somehow, because the whole advantage of providing it is to validate the output. A lot of these explainability issues are deeply rooted in the models themselves - I'm sure you're familiar with the general state of play there. But there are definitely ways to take steps in the right direction; see the sketch below for one small example.
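As a very rough illustration of the sort of step I mean (not a fix for explainability, just a sanity check) - a sketch that flags answers whose cited study text isn't even semantically close to the answer. It assumes the sentence-transformers library; the model name, threshold, and example strings are all placeholders:

from sentence_transformers import SentenceTransformer, util

# Placeholder model and threshold; the idea is just to flag answers whose
# cited snippet doesn't resemble the answer at all.
model = SentenceTransformer("all-MiniLM-L6-v2")

def citation_supports_answer(answer: str, citation: str, threshold: float = 0.5) -> bool:
    """Crude check: is the cited study text semantically close to the answer?"""
    embeddings = model.encode([answer, citation], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return similarity >= threshold

answer = "Metformin is a common first-line treatment for type 2 diabetes."
citation = "In this trial, metformin reduced HbA1c in adults with type 2 diabetes."
print(citation_supports_answer(answer, citation))

Cosine similarity is obviously a blunt instrument - a proper entailment model would be closer to what you'd actually want - but it gets at the idea of validating the output against the thing you're citing.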
If you'd like any input at any point feel free to fire over a DM!
Top-Perspective2560 t1_j0ez2z9 wrote
The first thing I’d say, and this is really important: You need to put a disclaimer on the site clearly stating that it’s not medical advice of any kind.
Explainability is always the sticking point in healthcare. This is pretty cool, but unless you can explicitly state why the model is giving that advice/output, it can never be truly useful, and worse, can open you up to all sorts of issues around accountability and liability. Tracing back to the original studies is a good thing, but doesn’t necessarily answer the question of why the model thinks that study should result in that advice.
Deep Learning models in healthcare are typically relegated to the realms of decision-support at best for the moment because of these issues. Even then, they’re often ignored by clinicians on the whole for a variety of reasons.
The methodology for determining what advice to give is quite shaky too. There is usually a bit more to answering these kinds of questions. What are the effect sizes given in the studies, for example? What kind of studies are they?
Anyway, I hope that doesn’t come across as overly-critical and is constructive in some way. AI/ML for healthcare can be a bit of a minefield, but it’s my area of research so just thought I’d pass on my thoughts.
Edit just to add: It would probably be really beneficial for you to talk to a clinician or even a med student about your project. From my experience, it's pretty much impossible to build effective tools or produce good, impactful research in this domain without input from actual clinicians.
Top-Perspective2560 t1_iyqu5e6 wrote
This isn’t really a fully developed thought, but how about an autoencoder of some kind? You would want it to accept the two halves of the image as input and reconstruct the full image.
Edit: yeah, seems like that rough approach has been used in this work:
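To make the rough shape of the idea concrete - a minimal PyTorch sketch, assuming e.g. 3x64x64 images split into left/right halves; the architecture and sizes are placeholders, not a recommendation:

import torch
import torch.nn as nn

# Minimal convolutional autoencoder sketch: the two 3x64x32 halves are
# stacked on the channel axis and the network reconstructs the full
# 3x64x64 image. All layer sizes are illustrative.
class HalvesToImageAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, stride=2, padding=1),   # 6x64x32 -> 32x32x16
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # -> 64x16x8
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # -> 32x32x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # -> 16x64x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=(3, 4), stride=(1, 2), padding=1),  # -> 3x64x64
            nn.Sigmoid(),
        )

    def forward(self, left, right):
        x = torch.cat([left, right], dim=1)  # stack the halves on the channel axis
        return self.decoder(self.encoder(x))

# One training step on random data, just to show the shapes line up.
model = HalvesToImageAE()
left = torch.rand(8, 3, 64, 32)
right = torch.rand(8, 3, 64, 32)
target = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(model(left, right), target)
loss.backward()

Stacking the halves on the channel axis is just one way to feed them in; you could equally concatenate them spatially or encode each half separately and merge the latents.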
Top-Perspective2560 t1_iykp2sa wrote
Reply to [p] Really Dumb Idea(bear with me) by poobispoob
I’m very into the outdoors too and was wondering about this very thing not long ago when I was trying to decide between ConCamo or PenCott Greenzone for my area.
I don’t know that you would need an ML-based solution to this. One way to frame the problem is that you’re trying to find a pattern whose colour channels are the closest match to the environment. I can’t remember my exact thinking now, but I suspect there are simpler ways to assess that, and I’m not sure any “learning” is required of the system. That said, one way to measure how good a match is would be to try to maximise the number of false negatives from a simple object detection network like Faster R-CNN when shown a given camo against multiple photos of the area you intend to use it in - roughly along the lines of the sketch below.
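A rough sketch of that last idea, assuming torchvision (0.13+ for the weights argument) and COCO's "person" class; the score threshold and the random placeholder image are just for illustration - in practice you'd feed in photos of someone wearing the camo in the target environment:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained COCO detector; in COCO's label set, class 1 is "person".
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def person_detections(image: torch.Tensor, score_threshold: float = 0.5) -> int:
    """Count confident 'person' detections in one CHW image scaled to [0, 1]."""
    output = detector([image])[0]
    confident_people = (output["labels"] == 1) & (output["scores"] >= score_threshold)
    return int(confident_people.sum())

def camo_score(images):
    """Lower is better: average number of times the wearer is spotted."""
    return sum(person_detections(img) for img in images) / len(images)

# Placeholder input; real use would load photos of the camo in the field.
print(camo_score([torch.rand(3, 480, 640)]))

Whether detector false negatives actually track how well a human observer is fooled is a separate question, of course, but it's a cheap proxy to start from.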
Top-Perspective2560 t1_iwxj2es wrote
Reply to comment by dimenicklecnt in I found Walmart's $6.88 onn iPhone 14 Pro gel case pretty good. I got this as a short term case (till black Friday), but am considering keeping it. I find it fairly similar to the Otterbox Symmetry case on my iPhone X, that I upgraded from. by dimenicklecnt
Yeah, that was the other point for my one. Can’t remember if it was a Symmetry but must have at least been very similar to it because I know exactly what you mean.
Top-Perspective2560 t1_iwxdeis wrote
Reply to I found Walmart's $6.88 onn iPhone 14 Pro gel case pretty good. I got this as a short term case (till black Friday), but am considering keeping it. I find it fairly similar to the Otterbox Symmetry case on my iPhone X, that I upgraded from. by dimenicklecnt
I had a similar Otterbox case for my iPhone Xr. The failure point will be those joins between the silicone and plastic if you plan on taking it out at all. Still, it's a pretty good deal for $6.88.
Top-Perspective2560 t1_ivztc3b wrote
Reply to [D] Current Job Market in ML by diffusion-xgb
I think you have to bear in mind that MAANG exists in a different sphere from tech or AI/ML in general. Plenty of companies need AI/ML, and that won’t change just because MAANG is shedding bloat. That said, jobs will probably get at least a bit more competitive, because the people being laid off will be looking for new roles.
Top-Perspective2560 t1_irusjqp wrote
Too broadly-defined. What do you mean by sentient, exactly? The dictionary definition from Oxford Languages is "able to perceive or feel things." You could argue that a photodiode is sentient.
Top-Perspective2560 t1_ir86fh8 wrote
Reply to comment by alesi_97 in [R] Google Colab alternative by Zatania
It may actually solve the problem. I’ve run into similar issues before.
Source: CompSci PhD. I use Colab a lot.
Top-Perspective2560 t1_ir5eoku wrote
Reply to comment by Zatania in [R] Google Colab alternative by Zatania
Try uploading it to your Google Drive first.
Then you can mount your drive in your notebook by using:
from google.colab import drive
drive.mount("mnt")  # mount point; a folder called mnt will appear in the file browser
Run the cell and allow access to your Drive when the prompt appears.
In the Files tab on the left-hand pane you should now see a folder called mnt which contains the contents of your Google Drive. To get the path to a file, just right-click on the file > Copy path.
Top-Perspective2560 t1_ir4pqs7 wrote
Reply to [R] Google Colab alternative by Zatania
Are you trying to load the file straight into Colab or are you mounting your Google drive and loading from there?
Top-Perspective2560 t1_j9umv6r wrote
Reply to comment by maxToTheJ in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
My research is in AI/ML for healthcare. One thing people forget is that everyone is wary of AI/ML, and no-one is happy to completely delegate decision-making to an ML model. Even where we have models capable of making accurate predictions, there are so many barriers to trust - e.g. the black-box problem and a general lack of explainability - that these models are relegated to decision support at best and completely ignored at worst. I actually think that's a good thing to an extent: the barriers to trust are, for the most part, absolutely valid and rational.
However, the idea that these models are just going to be running amok is a bit unrealistic, I think - people are generally very cautious of AI/ML, especially laymen.