radi-cho
radi-cho t1_jde80wh wrote
Reply to [N] ChatGPT plugins by Singularian2501
For people looking for open-source tools around the GPT-4 API, we're actively updating the list at https://github.com/radi-cho/awesome-gpt4. Feel free to check it out or contribute if you're a tool developer. I guess some of the ChatGPT plugins will be open-source as well.
radi-cho t1_jd8gdzt wrote
Reply to [P] CodeAlpaca Code and Data release by immune_star
Great, congratulations! I was basically planning to attempt the same, so thanks for open-sourcing it:)
radi-cho OP t1_jb9gmgi wrote
Reply to comment by No-Intern2507 in [P] diffground - A simplistic Android UI to access ControlNet and instruct-pix2pix. by radi-cho
Thanks for the suggestion:)
radi-cho OP t1_jb4edeo wrote
Reply to comment by kryptoklob in [P] diffground - A simplistic Android UI to access ControlNet and instruct-pix2pix. by radi-cho
Stable Diffusion conditioned with ControlNet-Scribble or ControlNet-Canny. For the editing option, instruct-pix2pix.
radi-cho OP t1_jazvwtm wrote
Reply to comment by Fickle_Dragonfly3090 in [P] diffground - A simplistic Android UI to access ControlNet and instruct-pix2pix. by radi-cho
More models will be added continuously. Depending on Apple's review timing and app usage, the iOS version might be coming in 1-2 weeks.
radi-cho OP t1_jaxm1qw wrote
Reply to [P] diffground - A simplistic Android UI to access ControlNet and instruct-pix2pix. by radi-cho
The app: https://play.google.com/store/apps/details?id=com.radicho.diffground
Many more models and an iOS version are coming soon:)
radi-cho OP t1_ja27qnn wrote
Reply to [R] [N] VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion. by radi-cho
Paper: https://arxiv.org/pdf/2302.12251.pdf
GitHub: https://github.com/nvlabs/voxformer
Abstract: Humans can easily imagine the complete 3D geometry of occluded objects and scenes. This appealing ability is vital for recognition and understanding. To enable such capability in AI systems, we propose VoxFormer, a Transformer-based semantic scene completion framework that can output complete 3D volumetric semantics from only 2D images. Our framework adopts a two-stage design where we start from a sparse set of visible and occupied voxel queries from depth estimation, followed by a densification stage that generates dense 3D voxels from the sparse ones. A key idea of this design is that the visual features on 2D images correspond only to the visible scene structures rather than the occluded or empty spaces. Therefore, starting with the featurization and prediction of the visible structures is more reliable. Once we obtain the set of sparse queries, we apply a masked autoencoder design to propagate the information to all the voxels by self-attention. Experiments on SemanticKITTI show that VoxFormer outperforms the state of the art with a relative improvement of 20.0% in geometry and 18.1% in semantics and reduces GPU memory during training by ~45% to less than 16GB.
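For intuition, here is a toy sketch of the two-stage design described in the abstract: stage 1 keeps only the voxels that depth estimation marks as visible/occupied, and stage 2 densifies by propagating information from that sparse set to all voxels. All names, shapes, and the mean-mixing stand-in for self-attention are illustrative assumptions, not the authors' code.

```python
# Toy sketch of VoxFormer's two-stage idea (illustrative only).

def stage1_sparse_queries(depth_occupancy, features):
    """Keep features only for voxels marked occupied by depth estimation."""
    return {v: features[v] for v, occupied in depth_occupancy.items() if occupied}

def stage2_densify(sparse_feats, all_voxels, mask_token=0.0):
    """Initialize unseen voxels with a mask token, then mix in information
    from the sparse set (a crude stand-in for self-attention propagation)."""
    context = sum(sparse_feats.values()) / max(len(sparse_feats), 1)
    dense = {}
    for v in all_voxels:
        base = sparse_feats.get(v, mask_token)
        dense[v] = 0.5 * base + 0.5 * context  # toy "attention" = mean mixing
    return dense

voxels = ["v0", "v1", "v2", "v3"]
occ = {"v0": True, "v1": False, "v2": True, "v3": False}
feats = {"v0": 1.0, "v1": 9.9, "v2": 3.0, "v3": 9.9}

sparse = stage1_sparse_queries(occ, feats)  # occluded voxels get no query
dense = stage2_densify(sparse, voxels)      # every voxel now has a feature
```

The key point the abstract makes survives even in this toy form: features are only ever attached to visible structures in stage 1, and occluded/empty voxels are filled in afterwards.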
radi-cho OP t1_j99fh5s wrote
Reply to comment by walkingsparrow in [R] [N] In this paper, we show how a conversational model, 3.5x smaller than SOTA, can be optimized to outperform the baselines through Auxiliary Learning. Published in the ACL Anthology: "Efficient Task-Oriented Dialogue Systems with Response Selection as an Auxiliary Task." by radi-cho
About the intuition that it would produce responses further from the human ones (indeed, the BLEU is lower for this variant): in a way, it can act as a regularization that encourages more diverse responses and prevents some overfitting. That loss mostly affects the weights of the additional head, which is removed during inference, and we also scale it by a tuned constant to make sure it doesn't affect the whole architecture too much. I've sent you a PM in case you'd like more details or empirical insights.
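The mechanics of the weighted auxiliary loss can be sketched minimally; `aux_weight` here stands in for the constant mentioned above, and the value 0.1 is an illustrative assumption, not the one from the paper.

```python
# Hedged sketch of combining a main objective with a down-weighted
# auxiliary response-selection loss (illustrative values only).

def combined_loss(generation_loss, response_selection_loss, aux_weight=0.1):
    """Total training loss: the auxiliary term is scaled down so it
    regularizes the shared encoder without dominating generation."""
    return generation_loss + aux_weight * response_selection_loss

# At inference time the auxiliary head is dropped; only the generation
# path (and the shared weights it helped shape) remains.
train_loss = combined_loss(2.0, 5.0, aux_weight=0.1)
```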
radi-cho OP t1_j99eb0v wrote
Reply to comment by Cheap_Meeting in [R] [N] In this paper, we show how a conversational model, 3.5x smaller than SOTA, can be optimized to outperform the baselines through Auxiliary Learning. Published in the ACL Anthology: "Efficient Task-Oriented Dialogue Systems with Response Selection as an Auxiliary Task." by radi-cho
Thanks for the interest! You can follow me on Twitter: https://twitter.com/radi_cho
radi-cho OP t1_j96yj4l wrote
Reply to comment by __lawless in [R] [N] In this paper, we show how a conversational model, 3.5x smaller than SOTA, can be optimized to outperform the baselines through Auxiliary Learning. Published in the ACL Anthology: "Efficient Task-Oriented Dialogue Systems with Response Selection as an Auxiliary Task." by radi-cho
Just linked it in a top-level comment.
radi-cho OP t1_j8aqy5u wrote
Reply to [R] [P] OpenAssistant is a fully open-source chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so. by radi-cho
DALL-E was disrupted by Stable Diffusion, can OpenAssistant disrupt ChatGPT in your opinion?
radi-cho OP t1_j8aora0 wrote
Reply to [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Paper: https://arxiv.org/abs/2302.04761
Implementation by lucidrains (in progress): https://github.com/lucidrains/toolformer-pytorch
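For intuition, a minimal sketch of executing Toolformer-style bracketed API calls embedded in text and splicing the results back in. The exact annotation format in the paper differs slightly, and the tool registry here is an illustrative assumption.

```python
import re

# Toy executor for Toolformer-style API calls like "[Calculator(400 / 1400)]".
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
}

CALL_RE = re.compile(r"\[(\w+)\(([^)]*)\)\]")

def run_tool_calls(text):
    """Replace each [Tool(args)] span with [Tool(args) -> result]."""
    def _exec(match):
        tool, args = match.group(1), match.group(2)
        result = TOOLS[tool](args)
        return f"[{tool}({args}) -> {result}]"
    return CALL_RE.sub(_exec, text)

annotated = "Out of 1400 participants, 400 [Calculator(400 / 1400)] passed."
print(run_tool_calls(annotated))
```

In the paper, the model itself learns where to insert such calls during training; this sketch only covers the execution/splicing side.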
radi-cho t1_jdz40zp wrote
Reply to [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
Last week I released a CLI that can do this at scale: https://github.com/radi-cho/datasetGPT. Later today I'll use personal funds to generate a reasonably large task-oriented dataset with gpt-3.5 or gpt-4. I'll open-source it along with a way for people to contribute their own datasets so we can collect bigger ones. That would be helpful both for analyzing how LLMs work and for fine-tuning downstream models (Alpaca-like).
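The two-agent setup from the linked post (one model as the oracle, one as the guesser) can be sketched as below. The agents here are scripted stand-ins so the example runs without an API key; a real run would replace them with LLM calls, and datasetGPT's actual interface differs.

```python
# Toy oracle/guesser loop for 20 Questions with scripted stand-in agents.

def make_stub_agent(scripted_replies):
    replies = iter(scripted_replies)
    def agent(history):
        return next(replies)  # a real agent would prompt an LLM with `history`
    return agent

def play_20_questions(oracle, guesser, max_turns=20):
    history, transcript = [], []
    for _ in range(max_turns):
        question = guesser(history)
        answer = oracle(history + [question])
        history += [question, answer]
        transcript.append((question, answer))
        if answer.lower().startswith("correct"):
            break
    return transcript

oracle = make_stub_agent(["No.", "Yes.", "Correct! It was a cat."])
guesser = make_stub_agent(["Is it man-made?", "Is it an animal?", "Is it a cat?"])
log = play_20_questions(oracle, guesser)
```

Collecting many such transcripts is exactly the kind of conversation dataset the CLI is meant to generate at scale.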