Crazy-Space5384
Crazy-Space5384 t1_iuf6325 wrote
Reply to [D] Looking for suggestions on setting up autoscaling on GPU servers for AI inference (without kubernetes)? by fgp121
Virtualized or bare metal? Running on a cloud provider or on your own premises?
Crazy-Space5384 t1_isf1r74 wrote
Reply to comment by midasp in [D] Could a ML model be used for Image Compression? by midasp
But so does traditional data compression. So it's yet to be proven that an ML model gets closer to the entropy limit, given that the model must be transferred alongside the encoded text because of the size restriction on the decompressor binary.
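The size accounting described above can be sketched as follows: an ML compressor only wins if the encoded payload plus everything its decoder needs (e.g., a pretrained model) beats the traditional baseline. This is a minimal illustration using zlib as the baseline; the ML payload and model sizes are made-up placeholders, not measurements of any real model.

```python
import zlib

corpus = b"the quick brown fox jumps over the lazy dog " * 1000

# Traditional baseline: the decompressor (zlib) is tiny and standard,
# so effectively only the encoded payload counts.
baseline = len(zlib.compress(corpus, level=9))

# Hypothetical ML compressor: assume it halves the encoded size,
# but its decoder needs a pretrained model shipped alongside.
ml_payload = baseline // 2         # assumed encoded size (placeholder)
model_size = 10 * 1024 * 1024      # assumed 10 MiB model (placeholder)
ml_total = ml_payload + model_size

# Under a decompressor-size-limited benchmark, the ML approach only
# wins if payload + model together beat the baseline payload.
print("baseline bytes:", baseline)
print("ML total bytes:", ml_total, "wins:", ml_total < baseline)
```

On a small corpus like this the model dwarfs the payload, which is exactly the comment's point: the model must amortize over a very large corpus before the ML approach can approach the entropy limit in practice.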
Crazy-Space5384 t1_iseeqex wrote
Reply to comment by WikiSummarizerBot in [D] Could a ML model be used for Image Compression? by midasp
But they limit the size of the decompressor executable so that it cannot contain a priori knowledge about the text corpus. Meaning you can't include a pretrained network…
Crazy-Space5384 t1_iqoozdm wrote
I'd take the 2060 only if I found a particularly good deal; otherwise it's the 3060.
Crazy-Space5384 t1_j0l53oc wrote
Reply to TIFU by "ruining" a friendship by Wafflesfortheday
That’s when you take the plane to Cabo to party alone.