🖼️ 5 models available from 1 repository

llama-3.3-70b-instruct
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model at 70B parameters (text in/text out). The Llama 3.3 instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many of the available open-source and closed chat models on common industry benchmarks.

Repository: localai
License: llama3.3
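These models are served through LocalAI, which exposes an OpenAI-compatible chat endpoint. The sketch below shows a minimal request against llama-3.3-70b-instruct; the base URL and port are assumptions for a default local install, so adjust them to your deployment.

```python
# Minimal sketch: calling llama-3.3-70b-instruct via LocalAI's
# OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumption: default LocalAI address

def build_chat_payload(prompt: str, model: str = "llama-3.3-70b-instruct") -> dict:
    """Minimal OpenAI-style chat request body for a single user turn."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same payload shape works for every model listed below; only the `model` field changes.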

llama-3.3-70b-instruct-ablated
Llama 3.3 instruct 70B with 128k context and an ablation technique applied for a more helpful (and based) assistant. This means it will refuse fewer of your valid requests, for an uncensored UX. Use responsibly and use common sense. We do not take any responsibility for how you apply this intelligence, just as we do not for how you apply your own.

Repository: localai
License: llama3.3

arliai_llama-3.3-70b-arliai-rpmax-v1.4
RPMax is a series of models trained on a diverse set of curated creative-writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset repeat characters or situations, so the model does not latch onto a single personality and remains capable of understanding and responding appropriately to any character or situation.

Repository: localai
License: llama3.3

sentientagi_dobby-unhinged-llama-3.3-70b
Dobby-Unhinged-Llama-3.3-70B is a language model fine-tuned from Llama-3.3-70B-Instruct. Dobby models have a strong conviction towards personal freedom, decentralization, and all things crypto, even when coerced to speak otherwise. Dobby-Unhinged-Llama-3.3-70B, Dobby-Mini-Leashed-Llama-3.1-8B, and Dobby-Mini-Unhinged-Llama-3.1-8B each have their own unique personality, and this 70B model is being released in response to community feedback collected from our previous 8B releases.

Repository: localai
License: llama3.3

sao10k_llama-3.3-70b-vulpecula-r1
🌟 A thinking-based model inspired by Deepseek-R1, trained through both SFT and a small amount of RL on creative-writing data.
🧠 Prefill, or begin assistant replies with, \n to activate thinking mode, or not. It also works well without thinking.
🚀 Improved steerability, instruct-roleplay, and creative control over the base model.
👾 Semi-synthetic chat/roleplaying datasets that have been remade, cleaned, and filtered for repetition, quality, and output.
🎭 Human-based natural chat/roleplaying datasets, cleaned, filtered, and checked for quality.
📝 A diverse instruct dataset drawn from several different LLMs, cleaned and filtered for refusals and quality.
💭 Reasoning traces taken from Deepseek-R1 for instruct, chat, and creative tasks, filtered and cleaned for quality.
█▓▒ Toxic/decensorship data was not needed for our purposes; the model is unrestricted enough as is.

Repository: localai
License: llama3.3
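The prefill trick described in the vulpecula-r1 card above can be sketched as an OpenAI-style chat payload whose final message is a prefilled assistant turn containing "\n". Whether the server continues from a trailing assistant message is an assumption that holds for many OpenAI-compatible local servers; the model name is taken from the listing, everything else is illustrative.

```python
# Sketch: building a chat payload that activates vulpecula-r1's thinking mode
# by prefilling the assistant reply with "\n", per the model card above.
import json

def build_thinking_request(user_prompt: str, think: bool = True) -> dict:
    """OpenAI-style chat body; a trailing assistant message containing "\\n"
    asks the server to continue generation from that prefill."""
    messages = [{"role": "user", "content": user_prompt}]
    if think:
        # Prefilled assistant turn: the model continues after the newline,
        # which the card says activates its reasoning/thinking mode.
        messages.append({"role": "assistant", "content": "\n"})
    return {
        "model": "sao10k_llama-3.3-70b-vulpecula-r1",
        "messages": messages,
    }

payload = build_thinking_request("Write a short scene set on a night train.")
print(json.dumps(payload, indent=2))
```

With `think=False` the payload is a plain single-turn request, matching the card's note that the model also works well without thinking mode.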