gpt-oss release coming soon :)
t.d.a.g. PRO
sequelbox
AI & ML interests
open source, infinite games. (they/them)
Recent Activity
replied to
their
post
about 15 hours ago
liked
a model
about 15 hours ago
mradermacher/Valiant-Vanta-8B-Dark-Fusion-i1-GGUF
liked
a model
about 15 hours ago
mradermacher/Qwen3-4B-Valiant-Polaris-i1-GGUF
reacted to
mmhamdy's
post with 🔥
9 days ago
The new DeepSeek Engram paper is super fun! It also integrates mHC, and I suspect they're releasing all these papers to keep the V4 report to a reasonable length.
Here's a nice short summary from Gemini
posted
an
update
11 days ago
NEW RELEASE: it's here! Meet the newest member of the Valiant crew: Guardpoint, our new medical reasoning model!
- Trained on medical knowledge, management, diagnosis, and clinical tasks, with reasoning generated by DeepSeek-V3.2-Speciale!
- Structured medical reasoning responses are efficient and informative, cutting token costs for faster inference!
- Wide-ranging knowledge base: trained on a wide variety of medical disciplines, patient types, and query structures!
- High-quality medical responses emphasize performance, brevity, specificity, statistical rationality, and openness.
Get it now:
Guardpoint for Qwen 3 32B: ValiantLabs/Qwen3-32B-Guardpoint
Guardpoint for Qwen 3 14B: ValiantLabs/Qwen3-14B-Guardpoint
Powered by our new structured medical reasoning dataset: sequelbox/Superpotion-DeepSeek-V3.2-Speciale
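If you'd like to try Guardpoint locally, here's a minimal, unofficial loading sketch with Hugging Face transformers; the example prompt, dtype/device settings, and generation length are illustrative assumptions, not recommended settings.
```python
# Unofficial sketch: loading the 14B Guardpoint model with transformers and
# asking one question. Prompt, dtype, and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Qwen3-14B-Guardpoint"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype="auto",       # on older transformers versions this argument is torch_dtype
    device_map="auto",  # requires accelerate; adjust for your hardware
)

messages = [
    {"role": "user", "content": "Summarize first-line management options for newly diagnosed type 2 diabetes."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
Since these are reasoning models, the structured reasoning block arrives before the final answer, so leave enough room in max_new_tokens.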
We've been working hard on Guardpoint; we're really excited to share it with everyone!
We'll be bringing Guardpoint to more models soon, along with further releases for the Shining Valiant and Esper series!
Get our experimental models: https://huggingface.co/collections/sequelbox/experimental-reasoning-models
Get our reasoning datasets: https://huggingface.co/collections/sequelbox/reasoning-datasets
Help support our releases; donations fund our experimental models and datasets: https://huggingface.co/spaces/sequelbox/SupportOpenSource
2026 is going to be an amazing year for open source AI! It's time for the AI revolution you need: from the bottom up, built together by all of us.
for love, friendship, and better days,
allegra
replied to
their
post
about 2 months ago
put up a quick merge of the SV3 and Esper 3.1 Ministrals: https://huggingface.co/sequelbox/Ministral-3-14B-Reasoning-2512-PlumEsper1.1
posted
an
update
about 2 months ago
Two new releases today!
First up: our new Raiden-Mini dataset, powered by DeepSeek's newest deepseek-ai/DeepSeek-V3.2-Speciale model!
- A V3.2-Speciale reasoning showcase: the Raiden prompts test the model's creative, analytic, and general reasoning skills!
- HEAD TO HEAD: a comparison subset pits V3.2-Speciale against V3.2 with the same prompts, providing a direct look at each model's advantages!
Get the new Raiden-Mini dataset: sequelbox/Raiden-Mini-DeepSeek-V3.2-Speciale
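For a quick look at the data, here's an unofficial sketch with the datasets library; the split name and the way fields are printed are assumptions, so check the dataset card for the actual schema.
```python
# Unofficial sketch: browsing Raiden-Mini with the datasets library.
# The split name is an assumption; check the dataset card for the real schema.
from datasets import load_dataset

ds = load_dataset("sequelbox/Raiden-Mini-DeepSeek-V3.2-Speciale", split="train")
print(ds)  # features and row count

for key, value in ds[0].items():
    print(f"{key}: {str(value)[:200]}")  # preview the first example's fields
```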
On the model side, we've also brought Shining Valiant 3 to Ministral 3!
- Science-reasoning: sequelbox/Celestia3-DeepSeek-R1-0528 for physics, biology, chemistry, compsci, astronomy, Earth science, and information theory.
- AI to build AI: the sequelbox/Mitakihara-DeepSeek-R1-0528 dataset for high-quality reasoning performance on AI, MLOps, math and CUDA, complex adaptive and agentic systems, cognition, logic, linguistics, simulation, knowledge management, and more!
- Creative reasoning and general chat performance supplemented with sequelbox/Raiden-DeepSeek-R1
Get the newest SV3: ValiantLabs/Ministral-3-14B-Reasoning-2512-ShiningValiant3
Esper 3.1 is available for Ministral 3 as well: ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1
We're working hard on our next Big New Release, coming out in the next few weeks :)
Help support our releases; donations fund our models and datasets: sequelbox/SupportOpenSource
Open source matters. Fight for it with us.
with love and friendship,
allegra
replied to
their
post
about 2 months ago
We've added 14b and 3b as well; we'd specifically recommend the 14b for everyone to try: https://huggingface.co/ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1
reacted to
danielhanchen's
post with 🔥
about 2 months ago
Mistral's new Ministral 3 models can now be run & fine-tuned locally! (16GB RAM)
The Ministral 3 models have vision support and best-in-class performance for their sizes.
14B Instruct GGUF: unsloth/Ministral-3-14B-Instruct-2512-GGUF
14B Reasoning GGUF: unsloth/Ministral-3-14B-Reasoning-2512-GGUF
Step-by-step Guide: https://docs.unsloth.ai/new/ministral-3
All GGUF, BnB, FP8, etc. variant uploads: https://huggingface.co/collections/unsloth/ministral-3
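For a quick local test of the 14B Reasoning GGUF, here's a minimal sketch with llama-cpp-python; the quant filename pattern and context size are assumptions, and the Unsloth guide above is the authoritative reference for recommended settings.
```python
# Unofficial sketch: running the Ministral 3 14B Reasoning GGUF locally with
# llama-cpp-python. The quant pattern and n_ctx below are assumptions.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Ministral-3-14B-Reasoning-2512-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; pick one that fits your RAM
    n_ctx=8192,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in two sentences."}]
)
print(response["choices"][0]["message"]["content"])
```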
reacted to
sergiopaniego's
post
about 2 months ago
ICYMI, transformers v5 is out!
Grab a coffee ☕ and go read the announcement blog: https://huggingface.co/blog/transformers-v5
posted
an
update
about 2 months ago
NEW RELEASE: Esper 3.1 for Ministral 3 14b, 8b, and 3b!
- Esper is our full-stack, full-cycle coding, DevOps, and architecture specialist!
- Our newest, best DeepSeek technical datasets emphasize more challenging queries and tough real-world coding tasks across a variety of programming languages and development paradigms:
- Titanium 3 for coding and reasoning in DevOps and architecture: sequelbox/Titanium3-DeepSeek-V3.1-Terminus
- Tachibana 3 for high-difficulty code production in a variety of topics and programming languages:
- sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part2-DeepSeek-V3.2
- Mitakihara for MLOps, AI building, use, expertise, and research: sequelbox/Mitakihara-DeepSeek-R1-0528
Get Esper 3.1 now in all 3 Ministral 3 sizes! (We recommend 14b for general use.)
14b: ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1
8b: ValiantLabs/Ministral-3-8B-Reasoning-2512-Esper3.1
3b: ValiantLabs/Ministral-3-3B-Reasoning-2512-Esper3.1
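Here's a minimal serving sketch for the 14b model, assuming a local vLLM server and the OpenAI-compatible client; the port, API key placeholder, and example prompt are assumptions, not part of the release.
```python
# Unofficial sketch: querying a locally served Esper 3.1 through the
# OpenAI-compatible API. Start the server first, for example:
#   vllm serve ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1
# The port, api_key placeholder, and prompt are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1",
    messages=[{"role": "user", "content": "Write a Dockerfile for a small FastAPI service that runs as a non-root user."}],
)
print(resp.choices[0].message.content)
```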
We'll be bringing more models to Ministral soon, including Shining Valiant 3 :)
We're currently working hard on a big release in a new specialty - hoping to have that up on Valiant Labs before the end of the year! We'll keep pushing the boundaries of what personal-sized AI can do for you.
See our Experimental Reasoning models and open-source datasets: @sequelbox
Help us keep working for open source AI with a donation: sequelbox/SupportOpenSource
with love,
allegra
reacted to
danielhanchen's
post with ❤️
about 2 months ago
Qwen3-Next can now be run locally! (30GB RAM)
The models come in Thinking and Instruct versions and use a new architecture that delivers roughly 10x faster inference than Qwen3-32B.
Instruct GGUF: unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF
Thinking GGUF: unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF
Step-by-step Guide: https://docs.unsloth.ai/models/qwen3-next
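One way to fetch just the quant you want is huggingface_hub's snapshot_download; the glob pattern below is an assumption, and the Unsloth guide covers which quants fit in 30GB RAM.
```python
# Unofficial sketch: downloading a single quant of the Instruct GGUF.
# The allow_patterns glob is an assumption; check the repo's file list for
# the exact quant names before downloading.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF",
    allow_patterns=["*Q4_K_M*"],  # assumed quant; pick one that fits your RAM
)
print("Downloaded to:", local_dir)
# Point llama.cpp (llama-cli / llama-server) at the downloaded .gguf file(s).
```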
posted
an
update
about 2 months ago
We strongly disagree with Hugging Face's decision to remove the Epstein files dataset. As an open source community, it is imperative that we support free access to important information.
Torrents remain available for those looking to use the information, but ease of access matters too. The datasets library provides legitimate value to users; it matters to be able to access content here.
We'd like to encourage everyone to retain local copies of anything on Hugging Face that's important to you.
posted
an
update
2 months ago
NEW RELEASE: UML Generator is here!
- Our newest Experimental Reasoning release: create Unified Modeling Language diagrams to provide analysis and insight into your queries and situations!
- Multi-step reasoning reliably identifies the diagram structure before the user-facing response, which is XMI 2.5.1 code containing the UML diagram. Load the diagram into the UML tool of your choice!
- Trained in a variety of subjects for flexible analysis: software architecture, software development, business processes, systems engineering, data modeling, microservices, reverse engineering and more!
UML Generator is available for multiple sizes of gpt-oss and Qwen 3, giving users more flexibility:
gpt-oss-120b: sequelbox/gpt-oss-120b-UML-Generator
gpt-oss-20b: sequelbox/gpt-oss-20b-UML-Generator
Qwen3-14B: sequelbox/Qwen3-14B-UML-Generator
Qwen3-4B-Thinking-2507: sequelbox/Qwen3-4B-Thinking-2507-UML-Generator
You can also get the UML Generator dataset to train your own models to use the UML Generator format: sequelbox/UML-Generator-Dataset-DeepSeek-V3.2
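Here's a minimal, unofficial sketch of the intended flow with the 4B variant: generate a response with transformers, then check that the XMI payload parses before loading it into a UML tool. The prompt and the extraction heuristic are assumptions rather than official usage guidance.
```python
# Unofficial sketch: generating a UML diagram as XMI 2.5.1 with the 4B model
# and checking that the output is well-formed XML. The prompt and the simple
# extraction heuristic are assumptions, not official usage guidance.
import xml.etree.ElementTree as ET
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sequelbox/Qwen3-4B-Thinking-2507-UML-Generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Model a simple checkout flow as a UML sequence diagram."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=4096)
text = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Heuristic: assume the XMI payload starts at the first XML declaration,
# after the model's reasoning.
start = text.find("<?xml")
xmi = text[start:] if start != -1 else text
ET.fromstring(xmi)  # raises if the payload is not well-formed XML
print(xmi[:500])
```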
Support our experimental open-source research efforts, models and datasets: sequelbox/SupportOpenSource
See our other Experimental Reasoning models: https://huggingface.co/collections/sequelbox/experimental-reasoning-models
with love,
allegra
reacted to
salma-remyx's
post with 🔥
3 months ago
We've built over 10K containerized reproductions of papers from arXiv!
Instead of spending all day trying to build an environment to test that new idea, just pull the Docker container from the Remyx registry.
And with Remyx, you can start experimenting faster by generating a test PR in your codebase based on the ideas found in your paper of choice.
Hub: https://hub.docker.com/u/remyxai
Remyx docs: https://docs.remyx.ai/resources/ideate
Coming soon, explore reproduced papers with AG2 + Remyx: https://github.com/ag2ai/ag2/pull/2141
replied to
their
post
3 months ago
reacted to
umarbutler's
post
3 months ago
I'm excited to announce the release of Kanon 2 Embedder, the world's best legal embedding model, ranked first on the Massive Legal Embedding Benchmark!
This model is the product of quite literally months of painstaking work alongside @abdurrahmanbutler collecting, cleaning, and processing terabytes of data as well as coming up with novel improvements to the standard embedder training recipe to push the limits of what's possible.
Kanon 2 Embedder is my most advanced model to date. On MLEB, it benchmarks as 9% more accurate than OpenAI's best embedding model and 30% faster.
Even when truncated from 1,792 to 768 dimensions, Kanon 2 Embedder continues to hold the number one spot on MLEB.
Importantly, Kanon 2 Embedder is also privacy- and security-friendly: unlike Voyage, Cohere and Jina, none of your data is used to train our models by default.
Kanon 2 Embedder can also be self-hosted for enterprises with heightened security or reliability requirements.
You can read the full announcement on our blog to learn how we did it and how you can get started using Kanon 2 Embedder to embed your own legal documents: https://isaacus.com/blog/introducing-kanon-2-embedder
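For readers unfamiliar with embedding truncation, here's a generic illustration (not the Isaacus API) of what dropping from 1,792 to 768 dimensions usually looks like: keep the leading dimensions and re-normalize before computing cosine similarity.
```python
# Generic illustration (not the Isaacus API): truncating embeddings from
# 1,792 to 768 dimensions and re-normalizing before cosine similarity.
# Random vectors stand in for real embedder outputs.
import numpy as np

rng = np.random.default_rng(0)
full = rng.normal(size=(4, 1792))  # pretend these came from the embedder

truncated = full[:, :768]  # keep the leading 768 dimensions
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)  # unit length

similarities = truncated @ truncated.T  # cosine similarity on unit vectors
print(similarities.round(3))
```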
reacted to
tomaarsen's
post
3 months ago
🤗 Sentence Transformers is joining Hugging Face! 🤗 This formalizes the existing maintenance structure, as I've personally led the project for the past two years on behalf of Hugging Face! Details:
Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the project to Hugging Face. Sentence Transformers will remain a community-driven, open-source project, with the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged. The project will continue to prioritize transparency, collaboration, and broad accessibility.
Read our full announcement for more details and quotes from UKP and Hugging Face leadership: https://huggingface.co/blog/sentence-transformers-joins-hf
We see an increasing wish from companies to move from large LLM APIs to local models for better control and privacy, reflected in the library's growth: in just the last 30 days, Sentence Transformer models have been downloaded >270 million times, second only to transformers.
I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, both for their dedication to the project and for their trust in myself, both now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That choice ended up being very valuable for the embedding & Information Retrieval community, and I think this choice of granting Hugging Face stewardship will be similarly successful.
I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!
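For anyone new to the library, here's a minimal usage sketch; the model and sentences are arbitrary examples, not recommendations.
```python
# Minimal sketch of the sentence-transformers workflow the post refers to.
# The model and sentences are arbitrary examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
sentences = [
    "Sentence Transformers is now maintained under Hugging Face.",
    "The library is widely used for embeddings and information retrieval.",
]
embeddings = model.encode(sentences)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the pair
```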
reacted to
AdinaY's
post
3 months ago
HunyuanWorld Mirror 🔥 a versatile feed-forward model for universal 3D world reconstruction by Tencent
tencent/HunyuanWorld-Mirror
✨ Any prior in → 3D world out
✨ Mix camera, intrinsics, depth as priors
✨ Predict point clouds, normals, Gaussians & more in one pass
✨ Unified architecture for all 3D tasks
reacted to
paulml's
post with 🔥
3 months ago
Qwen3-VL-4B is incredibly easy to fine-tune!
We've trained the first DSE model built on it, and it's already performing at the same level as Jina v4!
While Jina Embeddings v4 is built on Qwen2.5-VL-3B (which has a non-commercial license), our model is based on Qwen3-VL-4B and released under Apache 2.0, making it fully commercially permissive.
Check out our DSE model here:
racineai/QwenAmann-4B-dse
posted
an
update
3 months ago
NEW RELEASE: Esper 3.1 for gpt-oss-20b!
- Esper is our full-stack, full-cycle coding, DevOps, and architecture specialist!
- Our newest, best DeepSeek technical datasets emphasize more challenging queries and tough real-world coding tasks across a variety of programming languages and development paradigms:
- Titanium 3 for coding and reasoning in DevOps and architecture: sequelbox/Titanium3-DeepSeek-V3.1-Terminus
- Tachibana 3 for high-difficulty code production in a variety of topics and programming languages:
- sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part2-DeepSeek-V3.2
- Mitakihara for MLOps, AI building, use, expertise, and research: sequelbox/Mitakihara-DeepSeek-R1-0528
GET IT NOW, FOR EVERYONE: ValiantLabs/gpt-oss-20b-Esper3.1
We'll have more releases of Esper coming up, plus more experimental open-source releases :) Find open source datasets and experimental models at @sequelbox
Help us keep working for open source AI with a donation: sequelbox/SupportOpenSource
more to come soon!
allegra
posted
an
update
4 months ago
NEW RELEASE: Esper 3.1!
- Esper is our full-stack, full-cycle coding, DevOps, and architecture specialist!
- Our newest, best DeepSeek technical datasets emphasize more challenging queries and tough real-world coding tasks across a variety of programming languages and development paradigms:
- Titanium 3 for coding and reasoning in DevOps and architecture: sequelbox/Titanium3-DeepSeek-V3.1-Terminus
- Tachibana 3 for high-difficulty code production in a variety of topics and programming languages:
- sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part2-DeepSeek-V3.2
- Mitakihara for MLOps, AI building, use, expertise, and research: sequelbox/Mitakihara-DeepSeek-R1-0528
Our first release in the Esper 3.1 series is built on Qwen3-4B-Thinking-2507. GET IT NOW, FOR EVERYONE: ValiantLabs/Qwen3-4B-Thinking-2507-Esper3.1
We'll be bringing Esper 3.1 to more, larger models as soon as we can; you can help this happen faster with a donation: sequelbox/SupportOpenSource
We're really happy about this one; let us know how Esper 3.1 works for you!
Support open source. It's our only hope for an AI future you'll actually want to live in.
More to come soon!
with our love and appreciation,
allegra