How To Create Your Own AI Chatbot Server With Raspberry Pi 4


bit_user

For This Project You Will Need
  • Raspberry Pi 4 8GB
  • PC with 16GB of RAM running Linux
  • 16GB or larger USB drive formatted as NTFS
I'm sure the Pi will also need to be running the 64-bit version of the OS. I don't know if people with the 32-bit OS would've gotten automatically upgraded, but this is probably still worth pointing out.
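A quick way to check which image you're on, in case anyone is unsure (this is just a generic Linux check, nothing specific to the article):
uname -m
That should report aarch64 on a 64-bit Raspberry Pi OS install, versus armv7l on the 32-bit one.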

As for the USB drive using NTFS, this surprised me. IMO, the only reason to use NTFS is if you need files larger than 4 GB and also need the drive to be readable from a Windows PC. Otherwise, I'd use a Linux-native filesystem such as XFS or BTRFS, which also handle files larger than 4 GB.
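If you do go the Linux-native route, formatting the stick is a one-liner — replace /dev/sdX1 with your actual USB partition, and note this wipes the drive:
sudo mkfs.xfs /dev/sdX1
or, for BTRFS:
sudo mkfs.btrfs /dev/sdX1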
 
Apr 5, 2023
I was able to get this working, but I had to follow some different directions. I am brand new to AI and a noob with Python, so my issues could easily have been caused by something I did wrong.

Step 7: I was not able to download the torrent. I ended up following these instructions from https://github.com/juncongmoo/pyllama:
pip install pyllama -U
pip install transformers
python3 -m llama.download --model_size 7B
Note: I use Linux Mint and had to turn off my firewall for the previous step. Otherwise the download hung indefinitely at 12.7GB.
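One way to sanity-check that the download actually completed is to look at the size of the weights folder, something like:
du -sh ./pyllama_data/7B
(pyllama_data is only my guess at the default output folder — adjust it to wherever llama.download put the files. The 7B weights should come to roughly 13GB.)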

Step 12: There was no quantize.py file in my llama.cpp directory. I read through the README and found this command, which worked:
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
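For anyone wondering about the arguments, my understanding (which may not match newer llama.cpp builds) is:
./quantize <f16 input model> <quantized output model> <quantization type>
where type 2 appeared to correspond to q4_0.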

I am running this on my Linux Mint machine, not a Raspberry Pi, so I was able to skip all of those steps.

To open the chat, I used the command ./examples/chat.sh
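As far as I can tell, chat.sh is just a small wrapper around the main binary, so you can also start an interactive session directly with something along these lines (model path and settings are only examples):
./main -m ./models/7B/ggml-model-q4_0.bin -n 256 --color -i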

I hope these notes help anyone else who gets stuck.
Thank you for this tutorial. I am looking forward to playing with the chat.
 
Aug 15, 2024
Hi, since you managed to work around these problems, I hope you don't mind if I ask for your help. I am stuck at step 11: when I enter the command to convert the 7B model files to ggml F16 format, it tells me there is no such file or directory, as if llama.cpp can't find the 7B model files.
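For context, this is what I understand the layout should be before running the step 11 command — please tell me if I have it wrong. The downloaded weights and tokenizer copied into the llama.cpp tree:
llama.cpp/models/7B/consolidated.00.pth
llama.cpp/models/7B/params.json
llama.cpp/models/tokenizer.model
and then the conversion run from inside the llama.cpp directory with something like:
python3 convert-pth-to-ggml.py models/7B/ 1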
 