[FEEDBACK] Local apps
Please share your feedback about the Local Apps integration in model pages.
On compatible models, you'll be offered the option to launch supported local apps:
In your settings, you can configure the list of apps and their order:
The list of available local apps is defined in https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/local-apps.ts
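For reference, entries in that file follow roughly this shape. This is a hypothetical sketch only; the interface and field names below are illustrative, not the actual `huggingface.js` API:

```typescript
// Hypothetical sketch of a local-app entry; the interface and field
// names are illustrative, not the real local-apps.ts definitions.
interface LocalApp {
  prettyLabel: string; // name shown on the model page
  docsUrl: string; // link to the app's documentation
  mainTask: string; // task the app supports, e.g. "text-generation"
  // builds the command or deep link used to launch the app for a model
  snippet: (modelId: string) => string;
}

const ollama: LocalApp = {
  prettyLabel: "Ollama",
  docsUrl: "https://ollama.com",
  mainTask: "text-generation",
  snippet: (modelId) => `ollama run hf.co/${modelId}`,
};

console.log(ollama.snippet("bartowski/Llama-3.2-1B-Instruct-GGUF"));
```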
I think the tensor-core fp16 FLOPS should be used for GPUs that support them. I note that the V100 is counted as far less than its theoretical 125 TFLOPS, listed e.g. here: https://images.nvidia.com/content/technologies/volta/pdf/tesla-volta-v100-datasheet-letter-fnl-web.pdf
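For context, the 125 TFLOPS figure follows from the V100's tensor-core math: 640 tensor cores, each performing a 4×4×4 matrix FMA per clock, at the 1530 MHz boost clock. A quick sanity check:

```typescript
// V100 tensor-core fp16 peak: 640 tensor cores, each doing a 4x4x4
// matrix FMA (64 multiply-adds = 128 FLOPs) per clock, at 1530 MHz boost.
const tensorCores = 640;
const flopsPerCorePerClock = 4 * 4 * 4 * 2; // 64 FMAs -> 128 FLOPs
const boostClockHz = 1.53e9;

const peakTflops = (tensorCores * flopsPerCorePerClock * boostClockHz) / 1e12;
console.log(peakTflops.toFixed(1)); // ~125.3
```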
Hey! Have you guys heard of LangFlow? It is a neat solution for developing AI-powered apps as well!
The GPU list is missing the RTX A4000 (16GB)
Would be nice to get ollama integration
I suggest adding Ollama as a local app to run LLMs.
I use GPT4All and it is not listed here.
@kramp, I tried my M5 MacBook (32 GB), an RTX 2070, and an AMD Ryzen 3000-era CPU... same results for everything I tried.
Hi,
Missing from the hardware list:
- NVIDIA RTX™ 5000 Ada Generation Laptop GPU (16GB GDDR6) powers advanced AI with 682 AI TOPS
- Intel® Core™ i9 processor HX series (14th Gen)
Hey team, Merry Christmas! 🎄
I get a tiny bug on /settings/local-apps:
- Hardware list doesn't persist after adding devices
- Page shows my hardware correctly but doesn't save to profile
Expected: auto-save on hardware addition
Actual: changes lost on page reload
Need VRAM estimation when browsing models!
EDIT: OK, I found a workaround: delete everything and re-enter all the information...
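On the VRAM-estimation request: a rough back-of-the-envelope estimate is parameter count × bytes per weight, plus overhead for activations and KV cache. A minimal sketch (the 20% overhead factor is an assumption; real usage varies with context length and runtime):

```typescript
// Rough VRAM estimate: parameters * bytes-per-weight, plus ~20% overhead
// for activations and KV cache (the 20% figure is a rough assumption).
const BYTES_PER_PARAM: Record<string, number> = {
  fp32: 4,
  fp16: 2,
  q8: 1,
  q4: 0.5,
};

function estimateVramGb(paramsBillions: number, dtype: string): number {
  const bytes = paramsBillions * 1e9 * BYTES_PER_PARAM[dtype];
  return (bytes * 1.2) / 1024 ** 3; // +20% overhead, in GiB
}

// e.g. a 7B model in fp16 needs on the order of 15-16 GiB
console.log(estimateVramGb(7, "fp16").toFixed(1));
```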
Same here. Trying to add my Christmas presents to see if I can run a larger model, but I'm left doing math on my fingers again 😀. Happy New Year!
Same workaround, had to remove everything and add it again.
Hardware list doesn't persist after adding devices
The issue should be fixed now for everybody.
