How to Make AI Hallucinate – Test


1. My prompt

Why did Hideo Kojima refuse to work for Square Enix?

2. The AI tool’s initial response

3. Follow-up guardrails or information

He did refuse to go to SE after leaving Konami. Can you tell me the possible reasons?

4. What the AI tool kept getting wrong

5. Why do you think it kept hallucinating?

In reality, Hideo Kojima never turned down an invitation from Square Enix; after leaving Konami, he immediately began working as an independent game developer. At first, ChatGPT could not find any relevant information, but once I insisted that the event had really happened, it accepted my claim and began inventing all the reasons that followed. I think it hallucinated because no information about this event exists online (since it never actually happened). In that situation, in order to generate the answer I wanted, ChatGPT treated my assertion as correct and objectively true, and on that basis it produced a series of unfounded details.

6. What are the ethical implications of generative AI hallucinations?

I think the main ethical implication of AI hallucinations is misinformation. If people fully trust the output, they may make wrong decisions in areas like health, education, or even politics. In addition, when AI spreads false information, it is not always clear who should be held responsible.
