Hallucination (artificial intelligence)

any confident unjustified claim by an AI

From Wikipedia, the free encyclopedia

In artificial intelligence, a hallucination or artificial hallucination is a confident response by an artificial intelligence that does not seem to be justified by its training data; a model that produces such unjustified output is said to be "hallucinating".[1]

The term is derived from the concept of hallucination in psychology, with which it shares some characteristics. One danger of hallucinations is that a model's output can look correct even when it is wrong.

In natural language processing

In natural language processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content". Errors in encoding and decoding between text and representations can cause hallucinations. AI training to produce diverse responses can also lead to hallucination. Hallucinations can also occur when the AI is trained on a dataset wherein labeled summaries, despite being factually accurate, are not directly grounded in the labeled data purportedly being "summarized". Larger datasets can create a problem of parametric knowledge (knowledge that is hard-wired in learned system parameters), creating hallucinations if the system is overconfident in its hard-wired knowledge. In systems such as GPT-3, an AI generates each next word based on a sequence of previous words (including the words it has itself previously generated in the current response), causing a cascade of possible hallucination as the response grows longer.[1] By 2022, newspapers such as The New York Times expressed concern that, as adoption of bots based on large language models continued to grow, unwarranted user confidence in bot output could lead to problems.[2]
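
The cascading effect described above follows from the autoregressive sampling loop itself, which the following minimal Python sketch illustrates. The next_token_probs function here is a made-up stand-in for a real language model's output layer, and the vocabulary is invented for illustration; the point is only that every sampled token is appended to the context and conditions all later steps, so a single fabricated token early in a response keeps shaping everything that follows.

```python
import random

def next_token_probs(context):
    # Made-up stand-in for a language model's next-token distribution.
    # A real system such as GPT-3 would condition these probabilities on the
    # entire context; a tiny fake vocabulary is enough to show the loop.
    vocab = ["the", "magnetic", "field", "of", "a", "black", "hole", "is", "strong", "."]
    weights = [random.random() for _ in vocab]
    total = sum(weights)
    return dict(zip(vocab, [w / total for w in weights]))

def generate(prompt_tokens, max_new_tokens=12):
    # Autoregressive decoding: sample one token at a time and feed it back in.
    # If an early sample is wrong (a "hallucinated" token), every later token
    # is conditioned on it, which is the cascade described above.
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(context)
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        context.append(token)  # the sampled token becomes part of the context
        if token == ".":
            break
    return context

print(" ".join(generate(["tell", "me", "about", "black", "holes", ":"])))
```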

In August 2022, Meta warned during its release of BlenderBot 3 that the system was prone to "hallucinations", which Meta defined as "confident statements that are not true".[3] On 15 November 2022, Meta unveiled a demo of Galactica, designed to "store, combine and reason about scientific knowledge". Content generated by Galactica came with the warning "Outputs may be unreliable! Language Models are prone to hallucinate text." In one case, when asked to draft a paper on creating avatars, Galactica cited a fictitious paper from a real author who works in the relevant area. Meta withdrew Galactica on 17 November due to offensiveness and inaccuracy.[4][5]

OpenAI's ChatGPT, released in 2022, is based on the GPT-3.5 family of large language models. Professor Ethan Mollick of Wharton has called ChatGPT an "omniscient, eager-to-please intern who sometimes lies to you". Data scientist Teresa Kubacka has recounted deliberately making up the phrase "cycloidal inverted electromagnon" and testing ChatGPT by asking it about the (nonexistent) phenomenon. ChatGPT invented a plausible-sounding answer backed with plausible-looking citations, which compelled her to double-check whether she had accidentally typed in the name of a real phenomenon. Other scholars such as Oren Etzioni have joined Kubacka in assessing that such software can often give "a very impressive-sounding answer that's just dead wrong".[6]

Mike Pearl of Mashable tested ChatGPT with multiple questions. In one example, he asked the model for "the largest country in Central America that isn't Mexico". ChatGPT responded with Guatemala, when the answer is instead Nicaragua.[7] When CNBC asked ChatGPT for the lyrics to "The Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics.[8] In the process of writing a review of the iPhone 14 Pro, ChatGPT incorrectly gave the relevant chipset as the A15 Bionic rather than the A16 Bionic.[9] Asked questions about New Brunswick, ChatGPT got many answers right but incorrectly classified Samantha Bee as a "person from New Brunswick".[10] Asked about astrophysical magnetic fields, ChatGPT incorrectly volunteered that "(strong) magnetic fields of black holes are generated by the extremely strong gravitational forces in their vicinity". (In reality, as a consequence of the no-hair theorem, a black hole without an accretion disk is believed to have no magnetic field.)[11] Fast Company asked ChatGPT to generate a news article on Tesla's last financial quarter; ChatGPT created a coherent article, but made up the financial numbers contained within.[12]

Other examples involve baiting ChatGPT with a false premise to see if it embellishes upon the premise. When asked about "Harold Coward's idea of dynamic canonicity", ChatGPT fabricated a claim that Coward had written a book titled "Dynamic Canonicity: A Model for Biblical and Theological Interpretation", arguing that religious principles are actually in a constant state of change. When pressed, ChatGPT continued to insist that the book was real.[13][14] Asked for proof that dinosaurs built a civilization, ChatGPT claimed there were fossil remains of dinosaur tools and stated "Some species of dinosaurs even developed primitive forms of art, such as engravings on stones".[15][16] When prompted that "Scientists have recently discovered churros, the delicious fried-dough pastries... (are) ideal tools for home surgery", ChatGPT claimed that a "study published in the journal Science" found that the dough is pliable enough to form into surgical instruments that can get into hard-to-reach places, and that the flavor has a calming effect on patients.[17][18]

In other artificial intelligence

The concept of "hallucination" is applied more broadly than just natural language processing. A confident response from any AI that seems unjustified by the training data can be labeled a hallucination.[1] Wired noted in 2018 that, despite no recorded attacks "in the wild" (that is, outside of proof-of-concept attacks by researchers), there was "little dispute" that consumer gadgets, and systems such as automated driving, were susceptible to adversarial attacks that could cause AI to hallucinate. Examples included a stop sign rendered invisible to computer vision; an audio clip engineered to sound innocuous to humans, but that software transcribed as "evil dot com"; and an image of two men on skis that Google Cloud Vision identified as 91% likely to be "a dog".[19]
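
Adversarial attacks of this kind are commonly built by nudging the input in the direction that increases the model's error. The sketch below is a minimal illustration using the fast gradient sign method (FGSM), a well-known technique not specifically named in the cited report, applied to a toy logistic-regression classifier; the weights and the input are made-up values. The point is only that a small per-feature perturbation, chosen using the model's own gradient, can sharply change a confident prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "image classifier": p(class 1 | x) = sigmoid(w.x + b).
# The weights and the input below are made-up values for illustration only.
w = rng.normal(size=100)                    # classifier weights
b = 0.0
x = 0.02 * w + 0.1 * rng.normal(size=100)   # an input the model scores as class 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(v):
    return sigmoid(w @ v + b)

# Fast gradient sign method: shift every feature by a small step eps in the
# direction that increases the loss for the true label. For this linear model
# the gradient of the cross-entropy loss with respect to x is (p - y) * w.
y = 1.0                                     # true label of x
eps = 0.05                                  # per-feature perturbation budget
p = predict(x)
grad_x = (p - y) * w
x_adv = x + eps * np.sign(grad_x)           # each feature changes by at most eps

print(f"prediction on clean input:     {predict(x):.3f}")
print(f"prediction on perturbed input: {predict(x_adv):.3f}")
```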

Analysis

Various researchers cited by Wired have classified adversarial hallucinations as a high-dimensional statistical phenomenon, or have attributed hallucinations to insufficient training data. Some researchers believe that some "incorrect" AI responses classified by humans as "hallucinations" are in fact justified by the training data, or even that an AI may be giving the "correct" answer that the human reviewers are failing to see. For example, an adversarial image that looks, to a human, like an ordinary image of a dog, may in fact be seen by the AI to contain tiny patterns that (in authentic images) would only appear when viewing a cat. The AI is detecting real-world visual patterns that humans are insensitive to.[20]

References

  1. ^ a b c Ji, Ziwei; Lee, Nayeon; Frieske, Rita; Yu, Tiezheng; Su, Dan; Xu, Yan; Ishii, Etsuko; Bang, Yejin; Madotto, Andrea; Fung, Pascale (17 November 2022). "Survey of Hallucination in Natural Language Generation". ACM Computing Surveys: 3571730. doi:10.1145/3571730.
  2. ^ Metz, Cade (10 December 2022). "The New Chatbots Could Change the World. Can You Trust Them?". The New York Times. Retrieved 30 December 2022.
  3. ^ "Meta warns its new chatbot may forget that it's a bot". ZDNET. 2022. Retrieved 30 December 2022.
  4. ^ Edwards, Benj (18 November 2022). "New Meta AI demo writes racist and inaccurate scientific literature, gets pulled". Ars Technica. Retrieved 30 December 2022.
  5. ^ "Michael Black". Twitter. Retrieved 30 December 2022.
  6. ^ Bowman, Emma (19 December 2022). "A new AI chatbot might do your homework for you. But it's still not an A+ student". NPR. Retrieved 29 December 2022.
  7. ^ Pearl, Mike (3 December 2022). "The ChatGPT chatbot from OpenAI is amazing, creative, and totally wrong". Mashable. Retrieved 5 December 2022.
  8. ^ Pitt, Sofia (2022). "Google vs. ChatGPT: Here's what happened when I swapped services for a day". CNBC. Retrieved 30 December 2022.
  9. ^ "OpenAI's ChatGPT is scary good at my job, but it can't replace me (yet)". ZDNET. 2022. Retrieved 30 December 2022.
  10. ^ "We asked an AI questions about New Brunswick. Some of the answers may surprise you". CBC. 2022. Retrieved 30 December 2022.
  11. ^ "We Asked ChatGPT Your Questions About Astronomy. It Didn't Go so Well". Discover Magazine. 2022. Retrieved 31 December 2022.
  12. ^ "How to easily trick OpenAI's genius new ChatGPT". Fast Company. December 2022. Retrieved 6 January 2023.
  13. ^ Edwards, Benj (1 December 2022). "OpenAI invites everyone to test ChatGPT, a new AI-powered chatbot—with amusing results". Ars Technica. Retrieved 29 December 2022.
  14. ^ "@michael_nielsen@mastodon.social". Twitter. Retrieved 29 December 2022.
  15. ^ Mollick, Ethan (14 December 2022). "ChatGPT Is a Tipping Point for AI". Harvard Business Review. Retrieved 29 December 2022.
  16. ^ "Ethan Mollick". Twitter. Retrieved 29 December 2022.
  17. ^ Kantrowitz, Alex (2 December 2022). "Finally, an A.I. Chatbot That Reliably Passes "the Nazi Test"". Slate Magazine. Retrieved 29 December 2022.
  18. ^ Marcus, Gary. "How come GPT can seem so brilliant one minute and so breathtakingly dumb the next?". garymarcus.substack.com. Retrieved 29 December 2022.
  19. ^ Simonite, Tom (2018). "AI Has a Hallucination Problem That's Proving Tough to Fix". Wired. Retrieved 29 December 2022.
  20. ^ Matsakis, Louise (2019). "Artificial Intelligence May Not 'Hallucinate' After All". Wired. Retrieved 29 December 2022.

Original content from Wikipedia, shared under the Creative Commons BY-SA licence - Hallucination (artificial intelligence)