A downloadable client for Windows and Linux

Anyboty is an LLM-based chatbot client. After first-time setup it does not require an internet connection, outside of optional updates. It does not require an API key, and all data is kept inside the client; it can only be shared if you share the save files yourself. You're talking to your computer, no one else's. There are no rules, no censorship, and zero oversight on what you do within the client.

FAQ

"What does this do? What can I do with it?"

Anyboty lets you create characters based on your favorite franchises or stories, or even make bots that can help you with your Spanish homework or roleplay fun scenarios with you. Want to have a fantasy adventure, but don't have a Game Master? There's one included with this client. Make a character based on your favorite celebrity and go on a virtual date, recreate the personalities of long-dead philosophers to ask for their insight, or see how they might interact with other historic figures.

"How do I use it?"

First, you'll need to download the client and run the launcher. Download the latest update, install a model from the models browser in the launcher, then press Start Client to start the client. It'll load one of the models you downloaded via the model manager.

"What does it run on?"

Windows and Linux only currently. No phones.

"Who is this for? I already use oogabooga/koboldcpp/etc..."

If you're savvy enough to tinker and you already know your way around GitHub and Python, this might not be for you, but don't be dissuaded entirely. This client only supports GGMLv3 models currently, with no current plans for backwards compatibility with older GGML models.

I'm also not currently planning to add support for GPTQ/exLLaMA, as GGML is starting to catch up in speed and I'm not entirely sure it's worth it. But that might change over time.

Currently, other clients exist that are compatible with GPTQ/exLLaMA. However, you need to be able to fully load the model into VRAM to really use those, and you need a somewhat modern NVIDIA GPU. (Do please let me know if this is out of date and I'll remove it.) If you're okay with following a lot of install instructions, have a fast NVIDIA GPU with a lot of VRAM (roughly 8GB for a 7B model, 12GB minimum for a 13B model, and 24GB for a 33B model), and like playing with the fastest models or non-LLaMA models, then you might want to explore other options. But if you want a smooth experience that will work on almost anything you run it on, with zero setup, fuss, or knowledge required, this client might be for you.

"Where do I find Models? Do I have to do any file management or researching for where to find models?"

You can find most compatible models in the launcher's included model browser. It searches Huggingface for compatible models via their API and automatically filters out incompatible listings over time as you use it. If a model disappears when you click on it in the model browser, it's because the listing was found to contain no compatible model files, or the files were not named in a way that identifies their specs.
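
For the curious, the snippet below shows roughly the kind of search involved, using the public huggingface_hub library. This is only an illustration of the idea; the launcher's actual query and filtering logic aren't documented here and may differ.

```python
# Illustration only: roughly the kind of Huggingface search a model browser
# might run. The launcher's real query and filtering logic may differ.
from huggingface_hub import HfApi

api = HfApi()
# List model repos whose names mention "ggml" (the format this client uses).
for model in api.list_models(search="ggml", limit=10):
    print(model.id)
```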

"My anti-virus is saying..."

First, please read this: https://github.com/godotengine/godot/issues/45563

I have no idea if this is purely a Godot issue, but that's what I found when people testing the pre-release builds reported the same issue. Either trust it or don't; that's your call. This is also why I've split the Linux and Windows versions despite them sharing 95% of the same files: for some reason the Linux build sometimes disagrees with 7zip. No idea why, and it's not consistent on each build, so your guess is as good as mine on that.

"I have a bug to report/feedback to give!"          

You can do both at the Official Website.

Installation instructions (Windows 10/11)

  1. If you have an NVIDIA GPU, please install CUDA Toolkit 12.1.1 for Windows (a quick way to check the install is shown after this list).
  2. Navigate to the Itch.io page and download the latest Windows launcher.
  3. Extract the archive wherever you'd like the client to be stored and run Anyboty-Launcher.exe
  4. Click download to download the latest update.
  5. Click Models and download a model. If you're not sure what you can handle, try starting with a 7B model, and pick q4_0 for the quant type.
  6. Click download for whichever model you settle on.
  7. Go back to the main menu of the launcher with the X at the top left
  8. Click start
  9. Welcome to Anyboty Client, please enjoy. If you have any bugs, please report them over on the Reports page. Please make sure to include your OS and client version.
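
Not sure whether the CUDA Toolkit from step 1 actually installed? A quick, optional way to check is to see whether nvcc and nvidia-smi are on your PATH; the short Python snippet below does exactly that and nothing else (it's not part of the client).

```python
# Optional sanity check after step 1: confirms the CUDA compiler (nvcc) and
# the NVIDIA driver tool (nvidia-smi) are visible on PATH. Not part of the
# client; skip it if you don't have an NVIDIA GPU.
import shutil

for tool in ("nvcc", "nvidia-smi"):
    path = shutil.which(tool)
    print(f"{tool}: {'found at ' + path if path else 'NOT found'}")
```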

Installation instructions (Debian Linux)

  1. If you have an NVIDIA GPU, please install CUDA Toolkit 12.1.1 for Linux. If you have an AMD GPU, please download and install Pytorch for AMD ROCm 5.4.2 and any other associated AMD drivers required (I have not tested this, please let me know if it works or not).
  2. Please make sure Python 3.10 is installed. Newer versions might work but could cause unknown complications; if you experience any strange Python issues, please try downgrading to Python 3.10. (A quick version check is shown after this list.)
  3. Navigate to the Itch.io page and download the latest Linux launcher.
  4. Extract the archive wherever you'd like the client to be stored and run Anyboty-Launcher.x86_64
  5. Click download to download the latest update.
  6. Click Models and download a model. If you're not sure what you can handle, try starting with a 7B model, and pick q4_0 for the quant type.
  7. Click download for whichever model you settle on.
  8. Go back to the main menu of the launcher with the X at the top left
  9. Click start
  10. Welcome to Anyboty Client, please enjoy. If you have any bugs, please report them over on the Reports page. Please make sure to include your OS, client version, and the output of the "uname -a" command in your report.
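
If you're not sure which Python version your system resolves to for step 2, this small check will tell you; anything other than 3.10 is the first thing I'd suspect if you run into odd Python errors.

```python
# Optional check for step 2: report the interpreter version the system uses.
# The client is tested against Python 3.10; other versions are untested.
import sys

major, minor = sys.version_info[:2]
print(f"Python {major}.{minor} detected")
if (major, minor) != (3, 10):
    print("Warning: versions other than 3.10 are untested and may cause issues.")
```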

Theoretical Minimum Requirements

Expected Performance

Not great, but it should work. You'll be stuck with the smallest model size (7B); if you close everything and use every last byte of RAM, you might be able to squeeze in a 13B model, but I wouldn't recommend it. You'll be waiting a while for responses, and it might not be worth it. At this time this is more an experimental minimum than a recommended setup.

  • OS: Windows 10/11 or Debian Linux
  • RAM: 16GB
  • CPU: 4 Total Cores with 2 threads assigned
  • GPU: None
  • Recommended Model Type: 7B (q4_0; maybe q3_K_M, but quality is abysmal at that point)
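
If you want a rough feel for why 16GB of RAM pins you to 7B: a q4_0 GGML model works out to roughly 4.5 bits per parameter, so a back-of-the-envelope estimate of the weight size looks like this (real usage is higher once you add the context and whatever else your PC is running).

```python
# Back-of-the-envelope weight-size estimate for q4_0 (~4.5 bits per parameter
# in GGML-style block quantization). Actual memory use is higher once the
# context (KV cache) and the rest of the OS are factored in.
BITS_PER_PARAM_Q4_0 = 4.5

def rough_weight_size_gb(params_billion: float) -> float:
    return params_billion * 1e9 * BITS_PER_PARAM_Q4_0 / 8 / 1e9

for size in (7, 13, 33):
    print(f"{size}B at q4_0: ~{rough_weight_size_gb(size):.1f} GB of weights")
```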

Recommended Minimum Requirements

This is what I'd recommend if you want to use the client and have a decent experience without needing a super expensive computer. You'll be able to run the 7B model with no issues, and you'll be able to run the 13B model with a little bit of tweaking. You'll be able to run the 33B model, but it'll be slow. If you're okay with waiting a few minutes for a response, this is the minimum I'd recommend.

  • OS: Windows 10/11 or Debian Linux
  • RAM: 32GB
  • CPU: 4 Total Cores with 4 threads assigned
  • GPU: NVIDIA GPU with 8GB VRAM
  • Recommended Model Type: 13B (q4_0) with around ~28 layers offloaded

Recommended Specs

This is what I'd recommend if you have a powerful computer already and want to go all out. You'll be able to run the 7B, 13B, and even the 33B model with absolutely no issues. 65B might still be a bit slow depending on your GPU, but it should be usable if you're okay with waiting a few minutes for a response at full context length.

  • OS: Windows 10/11 or Debian Linux
  • RAM: 64GB
  • CPU: 8 Total Cores with 6 threads assigned
  • GPU: NVIDIA GPU with 24GB VRAM
  • Recommended Model Type: 33B (q4_0) with around 63 layers offloaded
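
For context on what "layers offloaded" means: GGML-style runtimes let you push some of the model's transformer layers onto the GPU while the rest stay in system RAM, and the counts above are just how many layers fit in the listed VRAM. I'm not documenting the client's internals here, but in a llama.cpp-style Python binding (an assumption used purely for illustration) it looks something like this:

```python
# Illustration of GPU layer offloading in a llama.cpp-style runtime.
# This is NOT the client's internal code; the path and numbers are examples.
from llama_cpp import Llama  # pip install llama-cpp-python (CUDA build)

llm = Llama(
    model_path="models/example-13b.q4_0.bin",  # hypothetical model file
    n_gpu_layers=28,  # how many transformer layers to push onto the GPU
    n_ctx=2048,       # context window size
    n_threads=4,      # CPU threads for the layers left in system RAM
)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```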

Note: Other versions of CUDA will not work with the prebuilt wheels. If you need to build for your own CUDA Toolkit version, please read the included commands.txt for instructions.

Experimental AMD Support (Linux Only Currently)

If you're trying to use an AMD card on Linux, set torch_type in settings.json to "amd". This will enable a special "amd" GPU mode I've not tested myself. Let me know if it doesn't work.
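
If you'd rather not hand-edit the file, a few lines of Python can flip the setting. The path below is a placeholder; point it at the settings.json inside your install folder.

```python
# Enable the experimental AMD GPU mode by editing settings.json.
# The path is a placeholder: adjust it to wherever you extracted the client.
import json
from pathlib import Path

settings_path = Path("settings.json")  # hypothetical location; adjust as needed
settings = json.loads(settings_path.read_text())
settings["torch_type"] = "amd"         # key and value taken from the note above
settings_path.write_text(json.dumps(settings, indent=4))
print("torch_type set to 'amd'")
```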

Download

Download Now (name your own price)

Click download now to get access to the following files:

Anyboty Launcher - Linux 25 MB
Anyboty Launcher - Windows 24 MB
