@Jack1230 ChatGPT's answers were never meant to be taken as reliable in the first place. OpenAI's GPT-4 release blog has a Limitations section that addresses exactly this: Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it "hallucinates" facts and makes reasoning errors). Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.[1]