Has anyone had success fine-tuning an already fine-tuned model, such as Synthia-13B or any other model, with LoRA or QLoRA using llama.cpp or Axolotl?

I'm getting very mediocre results. My dataset is 1,700 examples with correctly formatted prompts. The model hallucinates and doesn't retain much, when it works at all (llama.cpp is completely broken for me).
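For context, my prompt formatting follows the base model's template. Here's a minimal sketch of what I'm doing, assuming the Synthia-style SYSTEM/USER/ASSISTANT layout (the exact template is an assumption; check the model card of whichever base you use):

```python
# Sketch of the prompt template used for each training example.
# Assumes the Synthia-style SYSTEM/USER/ASSISTANT format -- verify
# against the model card before training.
def build_prompt(system: str, user: str, assistant: str = "") -> str:
    """Format one example; leave `assistant` empty at inference time."""
    prompt = f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"
    if assistant:
        prompt += f" {assistant}"
    return prompt

example = build_prompt(
    "You are a helpful assistant.",
    "What is QLoRA?",
    "QLoRA fine-tunes a quantized base model with low-rank adapters.",
)
print(example)
```

If the template doesn't match what the model was originally tuned on, results degrade badly, so this is the first thing I ruled out.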

Which chat-tuned model is best for further fine-tuning?

I also tried Zephyr-7B-beta with just the PEFT library, with very similar results.

Any guides or pointers? Or someone willing to help?