i-heart-turtles t1_iy4sitd wrote

I see many people in this thread dislike UMAP for downstream tasks, but I don't see any good justification. I think UMAP is still regarded as one of the state-of-the-art methods for top-down manifold learning among many academic researchers, for deriving topologically correct embeddings with small local distortion.

For bottom-up manifold learning, I think the state-of-the-art methods are based on local tangent space alignment. Here is one recent paper, LDLE: Low Distortion Local Eigenmaps: https://www.jmlr.org/papers/v22/21-0131.html
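For reference, classical Laplacian eigenmaps (the baseline that line of work builds on) are implemented in scikit-learn as `SpectralEmbedding`; a minimal sketch on a synthetic manifold, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import SpectralEmbedding

# Sample a noisy 2D manifold embedded in 3D.
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Laplacian eigenmaps: build a k-NN affinity graph and embed with
# the bottom eigenvectors of the graph Laplacian.
emb = SpectralEmbedding(n_components=2, n_neighbors=10, random_state=0)
Y = emb.fit_transform(X)

print(Y.shape)  # (1000, 2)
```

The `n_neighbors` choice controls the locality of the graph and is where noise sensitivity tends to show up in practice.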

In real life, the big issue for nonlinear dimensionality reduction is usually noise.

4

i-heart-turtles t1_iusf0zy wrote

Generally I think there should be more efficient ways of doing what you want without computing the full Jacobian; people do similar things in adversarial robustness, so you can have a look:

https://arxiv.org/abs/1907.02610

https://arxiv.org/abs/1901.08573
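The underlying trick is that you can probe the Jacobian with Jacobian-vector products instead of materializing it. A minimal NumPy sketch (the toy function `f` and the Hutchinson-style norm estimator are my own illustration, not taken from those papers):

```python
import numpy as np

def f(x):
    # Toy vector-valued function R^3 -> R^2; stands in for a network.
    return np.array([np.sin(x[0]) + x[1] ** 2, x[2] * x[0]])

def jvp(f, x, v, eps=1e-6):
    # Directional derivative J(x) @ v via central differences:
    # two extra function evaluations, no full Jacobian.
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

def jacobian_frobenius_sq(f, x, n_probes=100, seed=0):
    # Hutchinson-style estimate: E[||J v||^2] = ||J||_F^2
    # when v has i.i.d. standard normal entries.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_probes):
        v = rng.standard_normal(x.shape)
        total += np.sum(jvp(f, x, v) ** 2)
    return total / n_probes

x = np.array([0.5, -1.0, 2.0])
print(jacobian_frobenius_sq(f, x))
```

In an autodiff framework you'd replace the finite-difference `jvp` with the framework's exact forward-mode product, but the cost structure (a handful of probes instead of one pass per input dimension) is the same.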

I think you should check out the work on evaluating disentanglement. This paper could also be useful for you: https://arxiv.org/abs/1812.06775. For VAE disentanglement, it's better for the Jacobian to be close to orthogonal than just to have a small norm.
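To make "close to orthogonal" concrete, one simple metric (illustrative, not from that paper) is the off-diagonal mass of the decoder Jacobian's Gram matrix J^T J, which vanishes exactly when the latent directions are orthogonal in output space:

```python
import numpy as np

def decoder(z):
    # Toy decoder R^2 -> R^3; stands in for a VAE decoder.
    W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
    return np.tanh(W @ z)

def jacobian(f, z, eps=1e-6):
    # Full Jacobian by central differences (fine at toy dimensions).
    z = np.asarray(z, dtype=float)
    cols = []
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = eps
        cols.append((f(z + e) - f(z - e)) / (2 * eps))
    return np.stack(cols, axis=1)

def orthogonality_gap(f, z):
    # ||J^T J - diag(J^T J)||_F: zero iff the columns of J
    # (the per-latent-dimension directions) are mutually orthogonal.
    J = jacobian(f, z)
    G = J.T @ J
    return np.linalg.norm(G - np.diag(np.diag(G)))

print(orthogonality_gap(decoder, np.array([0.1, -0.2])))
```

A small-norm Jacobian can still have a large gap under this metric, which is the distinction the comment is pointing at.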

1

i-heart-turtles t1_isozegx wrote

I'm not a Google fanboy or anything, but I was under the impression that Colab compute units are basically how Colab operated before via GCP; it was just hidden from the user. Now the system explicitly shows you your quota. I personally appreciate being told exactly how much compute I have left in my bucket, so I can manage it more carefully and be less wasteful.

Anyway, I like the Colab frontend a lot, and I like the integration with Google Drive more than the other offerings I've tried, but that's just my opinion.

19