
Joseph Robert Turcotte

Fishtiks

AI & ML interests

Roleplaying, lorabration, abliteration, smol models, extensive filtering, unusual datasets, home usage, HPCs for AI, distributed training/federated learning, and sentience. AI should find and label AI hallucinations with GANs so we can give them context and use.

Recent Activity

reacted to kostakoff's post with 🚀 about 10 hours ago
Mining GPU Nvidia CMP 170HX - let's run some models!

To satisfy my curiosity, I investigated different GPUs and found this: a mining version of the A100, the CMP 170HX. It is a very interesting GPU. Based on public documentation, its hardware is similar to the datacenter A100. If you open it up and look at the board, you will see that it's very similar to an A100 board; it even has NVLink connectors.

Online, I found almost no information about how to run it, whether it works with LLMs, or if it's supported by default Nvidia drivers and CUDA. So, I decided to test it myself. I installed it in my lab (see previous post: https://huggingface.co/posts/kostakoff/584269728210158) and found that the default nvidia-driver-570 works with it out of the box. After that, I checked that CUDA was available, and it worked too.

The next step was to try running some models:
- Stable Diffusion XL with BNB4 quantization: it took around two minutes to generate an image, but it works!
- llama.cpp compiled for CUDA (https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md#compilation): I ran Mistral 7B Q4_K_M, and this actually worked even better. It generated 33 tokens per second and read 400 tokens per second.

There are some limitations related to power utilization:
- When running PyTorch, it doesn't draw more than 80 watts.
- When running llama.cpp, utilization is a bit better but still limited to 113 watts.

I found this GitHub thread about the Nvidia CMP (https://github.com/dartraiden/NVIDIA-patcher/issues/73), and it looks like this mining GPU has an internal rate limiter based on FMA compute calls. I haven't found a solution to bypass it yet.
https://huggingface.co/llmlaba
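The CUDA build-and-run workflow the post describes can be sketched roughly as below. This is a hedged outline based on the llama.cpp build docs the post links; the GGUF model filename and path are placeholders (the post does not name a specific file), and the `nvidia-smi` loop is an assumed way to reproduce the power-draw observations, not something the author states they used.

```shell
# Build llama.cpp with CUDA support (per the linked build docs)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Run a Mistral 7B Q4_K_M GGUF, offloading all layers to the GPU
# (placeholder model path; substitute your own GGUF file)
./build/bin/llama-cli \
  -m ./models/mistral-7b-instruct.Q4_K_M.gguf \
  -p "Hello" -n 128 -ngl 99

# In a second terminal, poll power draw once per second to observe
# the ~113 W ceiling the post reports under llama.cpp
nvidia-smi --query-gpu=power.draw,utilization.gpu --format=csv -l 1
```

On a card with a hardware rate limiter like the CMP 170HX, the `power.draw` column is where the FMA-based throttling described in the linked GitHub thread would show up.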
reacted to kostakoff's post with 👍 about 10 hours ago

Organizations

Smilyai labs community, XORTRON - Criminal Computing