Tags: Text Generation · Transformers · Safetensors · Chinese · English · joyai_llm_flash · conversational · custom_code · Eval Results
Instructions to use jdopensource/JoyAI-LLM-Flash with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use jdopensource/JoyAI-LLM-Flash with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jdopensource/JoyAI-LLM-Flash", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("jdopensource/JoyAI-LLM-Flash", trust_remote_code=True, dtype="auto")
```
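The direct-load snippet above only loads the model; a minimal sketch of a full chat turn is shown below, assuming the repository ships a tokenizer and chat template (the prompt and generation length are illustrative, and device_map="auto" requires accelerate):

```python
# Minimal sketch: one chat turn with the directly loaded model.
# Assumes the repository provides a tokenizer and chat template;
# the prompt and max_new_tokens value below are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jdopensource/JoyAI-LLM-Flash"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short reply and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```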
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use jdopensource/JoyAI-LLM-Flash with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jdopensource/JoyAI-LLM-Flash"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jdopensource/JoyAI-LLM-Flash",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
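Any OpenAI-compatible client can also call the server. A minimal sketch using the openai Python package, assuming the server started above is listening on localhost:8000 (the api_key value is a placeholder, since vLLM does not require one by default):

```python
# Minimal sketch: query the vLLM OpenAI-compatible server with the openai client.
# Assumes the server started above is listening on localhost:8000;
# the api_key is a placeholder because no key is configured by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="jdopensource/JoyAI-LLM-Flash",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```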
Use Docker

```sh
docker model run hf.co/jdopensource/JoyAI-LLM-Flash
```
- SGLang
How to use jdopensource/JoyAI-LLM-Flash with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "jdopensource/JoyAI-LLM-Flash" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jdopensource/JoyAI-LLM-Flash",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "jdopensource/JoyAI-LLM-Flash" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jdopensource/JoyAI-LLM-Flash",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use jdopensource/JoyAI-LLM-Flash with Docker Model Runner:
```sh
docker model run hf.co/jdopensource/JoyAI-LLM-Flash
```
vllm serve cli options raise errors
#13
by vBaiCai - opened
As described in https://huggingface.co/jdopensource/JoyAI-LLM-Flash/blob/main/docs/deploy_guidance.md:
```sh
# TP8 for extreme speed and long context
vllm serve ${MODEL_PATH} --tp 8 --trust-remote-code \
    --tool-call-parser qwen3_coder --enable-auto-tool-choice \
    --speculative-config $'{"method": "mtp", "num_speculative_tokens": 3}'
```
The CLI option --tp 8 is not valid; the correct argument is --tensor-parallel-size 8.
The --speculative-config value is also rejected by the current vLLM CLI. How can this be fixed? (The error message below comes from the official Docker image; a possible workaround is sketched after the traceback.)
```
(APIServer pid=352) You are using a model of type joyai_llm_flash to instantiate a model of type deepseek_v3. This is not supported for all configurations of models and can yield errors.
(APIServer pid=352) INFO 02-24 04:35:31 [model.py:514] Resolved architecture: DeepseekV3ForCausalLM
(APIServer pid=352) INFO 02-24 04:35:31 [model.py:1661] Using max model len 131072
(APIServer pid=352) Traceback (most recent call last):
(APIServer pid=352) File "/usr/local/bin/vllm", line 10, in <module>
(APIServer pid=352) sys.exit(main())
(APIServer pid=352) ^^^^^^
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/main.py", line 73, in main
(APIServer pid=352) args.dispatch_function(args)
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/serve.py", line 60, in cmd
(APIServer pid=352) uvloop.run(run_server(args))
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=352) return __asyncio.run(
(APIServer pid=352) ^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=352) return runner.run(main)
(APIServer pid=352) ^^^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=352) return self._loop.run_until_complete(task)
(APIServer pid=352) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=352) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=352) return await main
(APIServer pid=352) ^^^^^^^^^^
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1398, in run_server
(APIServer pid=352) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1417, in run_server_worker
(APIServer pid=352) async with build_async_engine_client(
(APIServer pid=352) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=352) return await anext(self.gen)
(APIServer pid=352) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 172, in build_async_engine_client
(APIServer pid=352) async with build_async_engine_client_from_engine_args(
(APIServer pid=352) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=352) return await anext(self.gen)
(APIServer pid=352) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 198, in build_async_engine_client_from_engine_args
(APIServer pid=352) vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=352) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1581, in create_engine_config
(APIServer pid=352) speculative_config = self.create_speculative_config(
(APIServer pid=352) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1300, in create_speculative_config
(APIServer pid=352) return SpeculativeConfig(**self.speculative_config)
(APIServer pid=352) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=352) s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=352) File "/usr/local/lib/python3.12/dist-packages/vllm/config/speculative.py", line 380, in __post_init__
(APIServer pid=352) raise NotImplementedError(
(APIServer pid=352) NotImplementedError: Speculative decoding with draft model is not supported yet. Please consider using other speculative decoding methods such as ngram, medusa, eagle, or mtp.
```
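A minimal sketch of a workaround, assuming a vLLM build that accepts only the long-form tensor-parallel flag and does not yet support MTP speculative decoding for this model: spell out --tensor-parallel-size and drop --speculative-config until the installed vLLM supports it (${MODEL_PATH} is the same placeholder used in the deploy guide).

```sh
# Sketch of a workaround, not the official deploy guidance:
# use the long-form tensor-parallel flag and omit --speculative-config
# until the installed vLLM build supports MTP speculation for this model.
vllm serve ${MODEL_PATH} --tensor-parallel-size 8 --trust-remote-code \
    --tool-call-parser qwen3_coder --enable-auto-tool-choice
```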
vBaiCai changed discussion status to closed