zzzkkk
2023-06-02 22:18:33 +08:00
Not at all swept up by this wave of AI hype.
Copying a passage from a blog post on the xemacs author's blog:
ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do. [...]
Of course, the makers of GPT learned by experience that an untended LLM will tend to spew Islamophobia or other hatespeech in addition to talking nonsense. The technical addition in ChatGPT is known as Reinforcement Learning from Human Feedback (RLHF). While the whole point of an LLM is that the training data set is too huge for human labelling, a small subset of curated data is used to build a monitoring system which attempts to constrain output against criteria for relevance and non-toxicity. It can't change the fact that the underlying language patterns were learned from the raw internet, including all the ravings and conspiracy theories. While RLHF makes for a better brand of bullshit, it doesn't take too much ingenuity in user prompting to reveal the bile that can lie beneath. The more plausible ChatGPT becomes, the more it recapitulates the pseudo-authoritative rationalisations of race science. [...]
ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things. Instead of expressing wonder, we should be asking whether it's justifiable to burn energy at "eye watering" rates to power the world's largest bullshit machine. [...]
Commentary that claims 'ChatGPT is here to stay and we just need to learn to live with it' is embracing the hopelessness of what I call 'AI Realism'. The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated. Saying, as the OpenAI CEO does, that we are all 'stochastic parrots' like large language models, statistical generators of learned patterns that express nothing deeper, is a form of nihilism. Of course, the elites don't apply that to themselves, just to the rest of us. The structural injustices and supremacist perspectives layered into AI put it firmly on the path of eugenicist solutions to social problems.