🖼️ Available 9 models from 1 repository

qwen2.5-32b
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters.

Repository: localai | License: apache-2.0

qwen2.5-32b-instruct
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters.

Repository: localai | License: apache-2.0

qwen2.5-32b-arliai-rpmax-v1.3
RPMax is a series of models trained on a diverse set of curated creative-writing and RP datasets with a focus on variety and deduplication. The model is designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, which keeps the model from latching onto a single personality and lets it understand and respond appropriately to any character or situation. Many RPMax users have mentioned that these models do not feel like other RP models, having a different writing style and generally not feeling in-bred.

Repository: localai | License: apache-2.0

qwen2.5-32b-rp-ink
A roleplay-focused LoRA finetune of Qwen 2.5 32B Instruct. Methodology and hyperparameters were inspired by SorcererLM and Slush. Yet another model in the Ink series, following in the footsteps of the Nemo one.

Repository: localai | License: apache-2.0

dumpling-qwen2.5-32b
nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B finetuned on: nbeerbower/GreatFirewall-DPO, nbeerbower/Schule-DPO, nbeerbower/Purpura-DPO, nbeerbower/Arkhaios-DPO, jondurbin/truthy-dpo-v0.1, antiven0m/physical-reasoning-dpo, flammenai/Date-DPO-NoAsterisks, flammenai/Prude-Phi3-DPO, Atsunori/HelpSteer2-DPO, jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.

Repository: localai | License: apache-2.0

tiger-lab_qwen2.5-32b-instruct-cft
Qwen2.5-32B-Instruct-CFT is a 32B parameter model fine-tuned using our novel Critique Fine-Tuning (CFT) approach. Built upon the Qwen2.5-32B-Instruct base model, this variant is trained to critique and analyze responses rather than simply imitate them, leading to enhanced reasoning capabilities.

Repository: localai | License: apache-2.0

subtleone_qwen2.5-32b-erudite-writer
This model is a merge of Rombos's top-ranked 32B model, based on Qwen 2.5, with three creative-writing finetunes. The creative content is a serious upgrade over the base it started with, and it has a much more literary style than the previous Writer model. I won't call it better or worse, merely a very distinct flavor and style. I quite like it, and encourage you to try it as well. Enjoy!

Repository: localai | License: apache-2.0

nbeerbower_dumpling-qwen2.5-32b-v2
nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B finetuned on: nbeerbower/GreatFirewall-DPO, nbeerbower/Schule-DPO, nbeerbower/Purpura-DPO, nbeerbower/Arkhaios-DPO, jondurbin/truthy-dpo-v0.1, antiven0m/physical-reasoning-dpo, flammenai/Date-DPO-NoAsterisks, flammenai/Prude-Phi3-DPO, Atsunori/HelpSteer2-DPO, jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.

Repository: localai | License: apache-2.0

azura-qwen2.5-32b-i1
This model was merged using the Model Stock merge method, with nbeerbower/Dumpling-Qwen2.5-32B as the base. The following models were included in the merge: rinna/qwen2.5-bakeneko-32b-instruct, EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2, zetasepic/Qwen2.5-32B-Instruct-abliterated-v2, and nbeerbower/Dumpling-Qwen2.5-32B-v2.

Repository: localai | License: apache-2.0
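
Model Stock merges like the one above are typically produced with mergekit. A minimal configuration sketch is shown below; the model list comes from the card above, but the tool choice, `dtype`, and all other settings are assumptions, not details from this listing:

```yaml
# Hypothetical mergekit config for a Model Stock merge (settings are assumed)
merge_method: model_stock
base_model: nbeerbower/Dumpling-Qwen2.5-32B
models:
  - model: rinna/qwen2.5-bakeneko-32b-instruct
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
  - model: nbeerbower/Dumpling-Qwen2.5-32B-v2
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yml ./merged-model`, which writes the merged weights to the given output directory.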