New paper out from my Meta internship on preventing LLM hallucinations with a model-specific finetuning method.