johanpmeert
Yeah, it's broken, and that after only a few days.

Hello,
Does anyone have an idea what caused this error? I tried reinstalling everything, but I always get to the end and then hit this:
(llama4bit) PS C:\AIStuff\text-generation-webui> python server.py --gptq-bits 4 --model llama-7b
Loading llama-7b...
Traceback (most recent call last):
File "C:\AIStuff\text-generation-webui\server.py", line 241, in <module>
shared.model, shared.tokenizer = load_model(shared.model_name)
File "C:\AIStuff\text-generation-webui\modules\models.py", line 101, in load_model
model = load_quantized(model_name)
File "C:\AIStuff\text-generation-webui\modules\GPTQ_loader.py", line 56, in load_quantized
model = load_quant(str(path_to_model), str(pt_path), shared.args.gptq_bits)
TypeError: load_quant() missing 1 required positional argument: 'groupsize'
(llama4bit) PS C:\AIStuff\text-generation-webui>
If you have an idea how I can fix this, please let me know.
I tried editing line 56 of the Python code and simply added another argument, which is clearly an integer. It seems the value has to be a power of 2, so you can enter 1, 2, 4, 8, 16 and so on. That makes the error disappear, but the model then fails while loading/copying values, so it's apparently not that easy to fix.
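For anyone else hitting this: the TypeError itself just means the `load_quant()` function in the GPTQ loader gained an extra required `groupsize` parameter, while the caller in GPTQ_loader.py still passes only three arguments. A minimal sketch of that mismatch (the function body here is illustrative, not the actual GPTQ-for-LLaMa code; `-1` as "no grouping" is an assumption based on common GPTQ conventions):

```python
# Illustrative stand-in for the updated load_quant() signature.
# The real function loads a quantized checkpoint; here we just
# return the arguments so the call pattern is visible.
def load_quant(model_path, checkpoint, wbits, groupsize):
    # groupsize: quantization group size; -1 conventionally means
    # "no grouping" (the whole row shares one scale/zero-point).
    return {"path": model_path, "ckpt": checkpoint,
            "wbits": wbits, "groupsize": groupsize}

# The old 3-argument call, as in GPTQ_loader.py line 56, now fails:
try:
    load_quant("llama-7b", "llama-7b.pt", 4)
except TypeError as e:
    print(e)  # missing 1 required positional argument: 'groupsize'

# Passing the extra argument makes the call itself succeed:
model = load_quant("llama-7b", "llama-7b.pt", 4, -1)
print(model["groupsize"])
```

So adding a fourth argument silences the TypeError, but the value has to match how the checkpoint was actually quantized; a model quantized without grouping wants `-1`, and a wrong group size would explain the load/copy errors that follow.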