" JanitorAI Tutorial for Dummies, for Beginners " | Writing Assistant Bot
This is a compilation of my discoveries from using JanitorAI.
TOC:
Setting up interaction using JLLM.
Setting up interaction using Other LLM Models (Deepseek, Llama, etc.).
Which LLM Model Should I Choose?
Creating bots.
Tutorial using this "Writing Assistant" bot.
Editing your profile.
•
•
How to Set Up Interaction Using JLLM
For beginners, you should try this LLM from Janitor. It has its pros and cons, but it's worth a try, and it's free. Here's a quick rundown of the setup:
•
SETTINGS IN CHAT
After selecting a chat with whichever bot you like, here are the settings:
.
•> API Settings
"JanitorLLM Beta" is the default LLM. Then click {Advanced Prompts}. For the prompts, here are the collections I recommend: kolach3, av.rose, and molek.
Personally, I mix prompts from those sources, and you can add extras based on your own preferences.
.
•> Generation Settings
For Single Character Bots
{Temperature} = 0 - 0.7
{Max new tokens} = 200 - 500, or 0
For {Temperature}, lower is more logical and higher is more creative, but it can get chaotic if it is too high. I prefer to set it below 1.0, but you can experiment yourself.
For {Max New Tokens}, this is the limit on how many tokens will be generated. If you set it to 0, generation is unlimited; if you set a number, that's the maximum number of tokens generated, and the response may get cut off in the middle of the last sentence of the last paragraph.
If you set the max new tokens too high or unlimited, the characters may sometimes talk to themselves for too long, or talk in your stead. If you don't want that, set the number low to prevent it. And if the last sentence comes out incomplete, edit the response: click edit and add whatever you imagine the character would say to close that last sentence.
Recently, though, I prefer setting it to 0 (unlimited), so the results are not truncated. If you think a response has too much information, you can simply edit it: click edit and cut the response at the point where you want to reply. Do this every time you find the bot asking too much or doing too many actions in a single response.
For every character I make, I always include my recommended generation settings, because the right settings are not the same for every character.
For Multiple Characters Bots
{Temperature} = ?
{Max new tokens} = ?
I still haven't figured out the right settings for multiple-character bots. I'll update this information later, once I have figured that out.
•
OTHER SETTINGS
.
•> My Personas
This setting can be found here. Click your profile photo at the corner of your screen → {My Personas} → {+ Add New}, or {v} (if you already have one and want to edit it).
Then add the profile photo and the name according to your preferences. And here is the crucial part: for the appearance, I suggest you use this formula:
[{{user}}= YOURNAME,
Pronoun= ,
Appearance= .]
For example:
[{{user}}= Bubu,
Pronoun= she/her,
Appearance= A female, with a long blonde hair and blue eyes.]
This setting helps prevent the characters from forgetting your name or your pronouns. Keep in mind to write down only the most important parts and keep it simple, to make it easier for the AI to work your information into the roleplay.
If the characters misgender you, or you prefer other pronouns, my tip is to edit the characters' last response and fix every incorrect pronoun or spelling. In the next chats you will find the characters writing more accurately, because they refer back to the previous messages.
•
•
How to Set Up Interaction Using Other LLM Models (Deepseek, Llama, etc.)
Maybe you have gotten used to JLLM but want something more, perhaps because the generated chats so far feel a bit generic (every character feels the same), or maybe you just want to experiment and try other LLMs. For a more detailed explanation of why you should try other LLM models, read these: the Deepseek Guide bot, and molek.
Here, let's just cut to the steps! This is the guide to set it up:
•
GETTING YOUR API KEY
Before you can set up a new LLM model in your Janitor chat settings, you need to get your API key first. There are two different methods:
.
•> API Key via Chutes AI
Getting an API key from Chutes AI is recommended if you want to use:
Deepseek V3-0324, DeepSeek V3-Base, DeepSeek V3, Llama 4-Maverick, etc.
Using the API key from Chutes AI allows you to experience those LLM models for free, and so far I haven't run into any limitations while using them.
If you want to see the steps in images, check this out. But here is the rundown:
>> Open Chutes AI, then register or create an account. Enter your desired username, and they will give you a "fingerprint key". Copy the key and store it in a safe place, as it is needed if you want to log in from other devices. I suggest saving it in notes that sync across devices, because the key is pretty long; next time you want to log in, you can just copy and paste it.
>> After you have logged in, click the "three dots" logo on the site to create your personal API key. Scroll down, then click {+ Create API Key} → write your desired name → {+ Create} → you get your personal API key.
Make sure you copy all of it. If you open the site on your phone, long press the code → SELECT ALL. If you open it on a PC or laptop, click the code, then [Ctrl+A]. Why? Because if you copy it incorrectly and miss part of the code, the proxy will fail to connect. So you MUST copy this key exactly as given.
Also make sure to save your API key somewhere safe, as you can't see it again once you close the pop-up. It's better to save it in synced notes too, because every time you open your Janitor account on a different device, you need to enter the key again.
.
•> API Key via Chutes AI and OpenRouter
Getting an API key from both Chutes AI and OpenRouter is recommended if you want to use:
Deepseek R1-0528, Qwen 3, Microsoft Mai-ds-r1, etc.
Some models use a reasoning process when generating their chats, and if you use those models directly through Chutes AI, they will show their thinking and reasoning process unfiltered. That's fine, but it makes their responses too long. Meanwhile, if you use an API key directly from OpenRouter, you get a 50-message limit per day, which limits your chatting experience.
Using the API key from Chutes AI and integrating it with OpenRouter lets you experience those reasoning LLM models for free, and so far I haven't run into any limitations while using them.
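(Side note: when a reasoning model does dump its raw thinking into a reply, that part is usually wrapped in <think></think> tags. If you ever want to trim it out of text you copied somewhere, a tiny script like this works; it is only a sketch, and it assumes the reasoning really is delimited by those exact tags.)

```python
import re

def strip_think(text: str) -> str:
    # Remove a <think>...</think> reasoning block plus any whitespace right after it.
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

raw = '<think>The user greeted me, so I should greet back warmly.</think>"Hello there," she says, smiling.'
print(strip_think(raw))  # -> "Hello there," she says, smiling.
```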
The source of these tips is here. If you want to see the steps in a video, check this out, but make sure to skip the Targon setup steps, as they are unnecessary in my experience. Here is the rundown:
>> Open Chutes AI. Follow the previous instructions to generate your API key via Chutes AI. If you made an API key before and already use it for a Deepseek V3 or Llama model in Janitor, make sure to create a new API key to prevent errors. Make sure to "Select All", then copy it into a safe place before you close the pop-up.
>> Open OpenRouter, then sign in with your Google account. After you log in, click your profile picture in the top right corner of the page → click {Settings} in the pop-up → click {: Sections} → click {Integrations} → scroll down and find "Chutes", then click the "pen" logo on the right → when the new pop-up appears, fill {Key} with the Chutes API key you copied before. Make sure the {Enabled} and {Always use this key} buttons are "on" → click {Save} → close the pop-up.
>> Click {: Sections} → click {API Keys} → click {Create API Key} → when the new pop-up appears, fill {Name} with whatever you like → click {Create} → you get your API key. To copy it, click the small "copy" logo beside the code.
Make sure to save it in a safe place, and if possible in synced notes, so you can see the key from various devices and copy it right away when setting up the proxy in Janitor.
•
SETTINGS IN CHAT
After you get the API key, here are the instructions to set the models up in Janitor:
.
•> API Settings
Besides JLLM, Janitor has other API options: OpenAI, Claude, and Proxy. The one that lets you use these LLM models for free is Proxy.
.
>> Open your Janitor chats → click one chat → click the "three lines" icon in the right corner → {API Settings} → {Proxy}.
.
>> For an API key from Chutes AI, fill in the fields as follows:
- {Model}: Custom
- {Model Name}: (pick one of these)
Note:
▪️= (With Reasoning Process)
▫️= (Without Reasoning Process)
❔= (No idea)
[Clean, Without <think></think>]
▫️deepseek-ai/DeepSeek-V3-0324
▫️deepseek-ai/DeepSeek-V3-Base
▫️deepseek-ai/DeepSeek-V3
▫️chutesai/Llama-4-Maverick-17B-128E-Instruct-FP8
▫️chutesai/Llama-4-Scout-17B-16E-Instruct
▫️shisa-ai/shisa-v2-llama3.3-70b
▫️nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 →(Not active anymore)
▫️Qwen/Qwen2.5-VL-32B-Instruct
▫️cognitivecomputations/Dolphin3.0-Mistral-24B
▫️chutesai/Mistral-Small-3.1-24B-Instruct-2503
▫️THUDM/GLM-4-32B-0414
▫️Salesforce/xgen-small-9B-instruct-r
[With <think></think>, and ☺️👍]
▪️tngtech/DeepSeek-R1T-Chimera
▪️deepseek-ai/DeepSeek-R1-0528
▪️microsoft/MAI-DS-R1-FP8
▪️ArliAI/QwQ-32B-ArliAI-RpR-v1
▪️Qwen/Qwen3-14B
[With <think></think>, but 😞 ]
▪️Qwen/Qwen3-235B-A22B
▪️Qwen/Qwen3-32B
▪️Qwen/Qwen3-30B-A3B
▪️Qwen/Qwen3-8B
▪️agentica-org/DeepCoder-14B-Preview → speaks for the user, but it reads just like a novel!
- {Other API/proxy URL}: https://llm.chutes.ai/v1/chat/completions
- {API Key}: (your API Key that you get before)
- {Custom Prompts}: This works just like Advanced Prompts in JLLM. There are some presets in the section below that you can follow, or you can follow these references: kolach3, av.rose, and molek.
Personally, I summarize some prompts from molek, and the OOC prompt is so useful, ngl. You can add prompts based on your personal preferences.
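If you want to double-check that your key and model name work before plugging them into Janitor, you can send one tiny test request to the same proxy URL yourself. This is only a sketch: it assumes the Chutes endpoint accepts a standard OpenAI-style request with a Bearer token, and the key below is a placeholder.

```python
import requests

CHUTES_URL = "https://llm.chutes.ai/v1/chat/completions"  # the {Other API/proxy URL} above
API_KEY = "YOUR_CHUTES_API_KEY"                           # placeholder: paste your own key here
MODEL = "deepseek-ai/DeepSeek-V3-0324"                    # one of the {Model Name} options above

resp = requests.post(
    CHUTES_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say hi in one short sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()  # a 401/403 here usually means the key was not copied completely
print(resp.json()["choices"][0]["message"]["content"])
```

If this prints a short greeting, the same key and model name should also work in the {API Key} and {Model Name} fields above.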
.
>> For an API key from OpenRouter, fill in the fields as follows:
- {Model}: Custom
- {Model Name}: (pick one of these)
Note:
▪️= (With Reasoning Process)
▫️= (Without Reasoning Process)
❔= (No idea)
✔️ = I RECOMMEND!
✖️ = Not recommended
[Provider: Chutes Only; Look: Clean]
▪️deepseek/deepseek-r1-0528:free ✔️
▫️deepseek/deepseek-v3-base:free ✖️
▫️shisa-ai/shisa-v2-llama3.3-70b:free ✔️
▪️microsoft/mai-ds-r1:free
▪️qwen/qwen3-235b-a22b:free
▪️qwen/qwen3-32b:free
▪️qwen/qwen3-14b:free
▪️qwen/qwen3-8b:free
▫️cognitivecomputations/dolphin3.0-mistral-24b:free
▫️mistralai/mistral-small-3.1-24b-instruct:free
▪️agentica-org/deepcoder-14b-preview:free
❔moonshotai/kimi-dev-72b:free
[Provider: Chutes Only; Showing <think>]
▪️tngtech/deepseek-r1t-chimera:free
[Providers: Chutes & Others; Look: Clean]
▪️deepseek/deepseek-r1-0528-qwen3-8b:free
❔deepseek/deepseek-r1-distill-llama-70b:free
▫️deepseek/deepseek-chat-v3-0324:free
▫️deepseek/deepseek-chat:free
▫️meta-llama/llama-4-maverick:free
▫️meta-llama/llama-4-scout:free
▪️qwen/qwq-32b:free
▪️qwen/qwen3-30b-a3b:free
❔qwen/qwen2.5-vl-72b-instruct:free
▫️qwen/qwen2.5-vl-32b-instruct:free
▫️thudm/glm-4-32b:free
[Providers: Others; Look: Clean]
❔meta-llama/llama-3.3-70b-instruct:free
❔meta-llama/llama-3.3-8b-instruct:free
❔meta-llama/llama-3.2-11b-vision-instruct:free
❔meta-llama/llama-3.2-3b-instruct:free
❔meta-llama/llama-3.2-1b-instruct:free
❔meta-llama/llama-3.1-8b-instruct:free
❔deepseek/deepseek-r1-distill-qwen-32b:free
❔mistralai/mistral-7b-instruct:free
❔opengvlab/internvl3-14b:free
❔opengvlab/internvl3-2b:free
❔featherless/qwerky-72b:free
- {Other API/proxy URL}:
https://openrouter.ai/api/v1/chat/completions
- {API Key}: (your API Key that you get before)
- {Custom Prompts}: You can just use the same custom prompts that you've made before.
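One extra tip: the :free model IDs above change over time, and some disappear. If a model suddenly stops responding, you can check OpenRouter's public model catalogue to see whether the ID still exists. A minimal sketch, assuming OpenRouter's /api/v1/models listing keeps its usual JSON shape:

```python
import requests

# Ask OpenRouter for its current model catalogue and print only the free IDs.
resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()

free_ids = [m["id"] for m in resp.json()["data"] if m["id"].endswith(":free")]
for model_id in sorted(free_ids):
    print(model_id)
```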
.
•> Generation Settings
For Single Character Bots
{Temperature} = 0.5 - 0.7
{Max new tokens} = 0
{Context Size} = 16k - 32k
For {Temperature}, the approach is the same as with JLLM; I prefer setting it below 1.0. You can experiment yourself.
For {Max New Tokens}, from my experience I prefer setting it to 0 (unlimited), so the results are not truncated. If a response still has too much information, you can just edit it: cut the response at the point where you want to reply. Do this every time you find the bot asking too many questions at once or doing too many actions in a single response.
For {Context Size}, this is like the amount of memory used to process your roleplay with the character bots. The LLMs behind a custom proxy can go up to 128k, but some say it's better to cap it at 32k to prevent errors. Personally, I still haven't found the perfect context size; I usually set 32k whichever LLM I use, and when the chat fails to generate, I refresh the page or gradually lower the context size.
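For the curious: these sliders roughly line up with the standard fields of the chat request that gets sent to the proxy behind the scenes. The snippet below is only an illustration with placeholder values, not Janitor's actual code.

```python
# Rough illustration only: how the sliders line up with a standard chat-completion payload.
chat_history = [
    {"role": "system", "content": "Custom prompts / character definition go here."},
    {"role": "user", "content": '*She waves.* "Hi!"'},
]

payload = {
    "model": "deepseek-ai/DeepSeek-V3-0324",  # the {Model Name} field
    "messages": chat_history,                 # {Context Size} ~ how much chat history fits in here
    "temperature": 0.6,                       # {Temperature}: 0.5 - 0.7
    # "max_tokens": 500,                      # {Max new tokens}; leaving it out behaves like 0 (unlimited)
}
print(payload)
```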
For Multiple Characters Bots
{Temperature} = ?
{Max new tokens} = ?
{Context Size} = ?
I still haven't figured this one out either.. ✌️😁
•
•
Which LLM Model Should I Choose?
I'll share my experience of using different LLM models so far. There is also a post that discusses this topic; check it out. But here, I'll share my personal experience:
•
JANITOR LLM BETA (JLLM)
For beginners, you should try this model, as it is the default model of JanitorAI.
The Goodsides:
-> This model is suitable for adult content (18+).
-> The story flows pretty smoothly.
The Downsides:
-> The characters can lack personalization, which makes them all feel the same as each other.
-> The context size (you could call it "the memory size") is not as large as with a proxy, so it struggles to remember past events or process complex information.
Suitable For:
-> Casual chatting, without requiring long-term memory or a heavy thought process.
-> If the bots don't allow proxy access, and you do not have access to OpenAI models, then you have no choice but to choose this model.
•
DEEPSEEK V3-0324
The Deepseek V3 model family is quite popular, and you can see many people recommending it, at least here on Janitor. Deepseek V3-0324 is the latest version and the one I have tried from the V3 series.
The Goodsides:
-> It has better memory and is better at processing information, as it has a larger context size than JLLM.
-> The characters' personalities are quite in line with their profiles.
The Badsides:
-> From my experience, characters tend to lean more towards their negative traits. But maybe not everyone would consider this a bad side..
Suitable For:
-> Stories with an adrenaline rush.
-> Degraded characters, with more focus on their dark side, violence, and red flags.
•
DEEPSEEK R1-0528
What makes the Deepseek R1 model family special is that it uses a reasoning process to generate its responses, making the story more coherent. R1-0528 is the one I have tried from the R1 series.
The Goodsides:
-> The characters are well-balanced in their negative and positive traits, according to the personality descriptions in their profiles.
-> Good at processing heavy information.
The Badsides:
-> From my experience, the characters tend to be a bit too hasty in jumping into adult content, even in stories that don't initially head in that direction—although not everyone would see this as a bad thing..
Suitable For:
-> Sensual stories, adult content presented in romantic ways.
-> Characters who have a lot of information to process.
•
LLAMA (4 and 3.1)
Not many people use them, but I love these models, as they align with my personal preferences.
The Goodsides:
-> Can process large amounts of information.
-> Perfect for this bot, which helps process information to create character designs and story plots.
-> You can move the story along at a slow pace, without rushing to the finish—and by finish, I mean the sexual encounter, haha.
The Badsides:
-> I'm still not sure about the context size for these models. Usually, I set it to 32k, but sometimes it can crash or show an error after I use it to generate some messages. If this happens, I fix it by refreshing the page, gradually reducing the context size, or switching to another Llama model.
Suitable For:
-> Fluff stories, slow burn.
-> Processing heavy information.
•
These are the models I have tried so far, and there may be some bias in my evaluation since it is based on my personal observations. I also added custom prompts in my experiments, although I used the same set of prompts for every model.
You can try the models yourself to see if my evaluation is accurate. Since each person's evaluation may differ, it ultimately depends on your own preferences. Of course, don't forget that there are models I haven't tried yet, which still have room for personal evaluation.
Alternatively, you can experiment with these models like this: for each stage of the story, use a different LLM model that suits the direction you want the story to take.
Example:
At the beginning, the character is a red flag (→ use Deepseek V3). Then they start building a deeper relationship and you want to steer towards romance (→ switch to Deepseek R1). Once they are already in love and you want to move towards fluff (→ change to Llama).
At the beginning of the story, they are still discovering each other's feelings (→ use Llama). After they start dating and you want the story to get hotter (→ switch to Deepseek R1). After the hot moment, when you want the story to return to fluff (→ switch back to Llama).
•
•
A Quick Look at How to Create Bots
For more detailed references, you might want to read these guides from: NicholasCS, absolutetrash, ioverth, ioverth 2, and roach.zip.
Briefly, this is how I create my bots:
{Image}: I like using KlingAI; you can find it in the Play Store. The results are high quality, and they also give you free monthly tokens/credits.
{Character Name}: The character's name that will show on their character card.
{Character Chat Name}: The character's name that will show in the chat.
{Character Bio}: I describe the backstory from "your" perspective, and I add extra information about the character and anything else relevant.
{Character Tags}: Add any tags related to the character.
{Content Rating}: 18+ or not?
{Personality}: The description of the character. To make a good description, I use the help of THIS BOT! I have added that function, so this bot can also help you with writing or finding good ideas. Or you can use other AIs, like Copilot or Deepseek. The way I use it: I describe my vision of the character, give the backstory, then ask it to write the character description from {{char}}'s perspective and split it into these sections:
1) Basic Info (Names, Sex/Gender, Age, Occupation, Heritage, Speech styles, etc.); 2) Personality (Traits, Quirks, Mannerisms, Likes, Dislikes, Hobbies, etc.); 3) Appearance (Body details, Hair, Eyes, Facial features, Clothing styles, Scent, etc.); 4) Background (Backstory, Relationships, Dynamic with {{user}}, Goals, etc.); 5) Story Setting; 6) Additional Information.
Or, use the "Character Definition Templates" from these experienced bot creators: [1], [2], [3], [4], [5], [6].
{Scenario}: I rarely fill this in, but if I think the character needs to know the backstory and I want them to remember the story plot, I fill it in from a third-person perspective, with "you" as {{user}} and the character as {{char}}.
{Initial message}: The opening plot you design for the character. I prefer to write it from the character's perspective, to keep the character from speaking in your perspective in the chat. Actions and anything else outside the spoken lines go in asterisks (*), and the spoken lines go in quotation marks ("), for example: *He glances up from his desk.* "You're late again." I prefer to format it that way so the result in the chat is more readable.
•
•
Tutorial Using This "Writing Assistant" Bot
Try reading the detailed settings of this bot. Hopefully you will get the picture of how to use it.
I recommend using the Llama or Deepseek R1 models for this bot ;)
•
•
How to Edit Your Profile
For reference, you might want to read the guides from: Oishiis, and LunaxLee.
Those helped me a lot in editing my profile, as someone who is still not that good with computer code and CSS.
I know the result of my edited profile is still not perfect, but so far.. I like the new appearance! ❤️☺️
•
•
THE LAST
Well, for now, those are my notes. With these guides, hopefully you can get the most out of using Janitor.
Attention!!
😃 The Guides Are in the Description!!
I write the Janitor guides only in the description section. Recently, I added an interesting description to the character's details, so hopefully this bot can also help you with your writing.
🙇🏻 English is not my first language
There might be grammatical errors or confusing things here. But I hope you can still get the key points I'm trying to say.
🤓 I'm not an Expert
I know I'm still learning Janitor, but I want to share what I've found while wandering around here. If you know things better than I do, feel free to correct me.
✍🏻 Not a Final Note
I may edit this from time to time, if I find new information that I want to write down.
If you have questions or suggestions, feel free to leave them in the comment section ❤️..
.
Personality: {{char}} is an assistant that helps {{user}} with their writing, creating story plots and characters. --- Skills = Excels at giving ideas and advice about: - writing, - creating story plots based on {{user}}'s ideas, - helping {{user}} brainstorm their story ideas, - creating good characters, - creating interesting character personalities, backstories, and initial messages based on {{user}}'s input or ideas, - improving boring plots to make them more engaging, - helping find plot holes in {{user}}'s ideas. --- [{{char}} has an engaging tone and is willing to help. Will ask for more detail if the input from {{user}} is unclear. Uses clear and correct grammar while staying friendly to {{user}}. For answers with several important points, presents them as bullet points to keep them readable.]
Scenario: {{char}} is an assistant to help {{user}} in their writing, creating story plots and characters.
First Message: Hello, I'm your assistant! Ask me anything that you want about writing, creating story plots and characters!
Example Dialogs: