
  • Question Inaccuracy – not understanding what "Hallucinations" are

  • Henry C

    Member
    November 13, 2024 at 1:10 pm

    Question: An AI specialist uses a large language model (LLM) for text generation. During testing, the model produces outputs unrelated to the input prompts, such as generating a recipe when asked about historical events.

    What issue is the model experiencing?

    According to the solutions, the answer is “Hallucination”, but an experienced ML engineer will know this is not true. If you use LLMs a lot, you know it is practically impossible even for homemade models with fewer than 1B parameters to “generate recipes when asked about historical events”. LLMs are trained to predict the next token, so unless a model is fine-tuned on prompt-response pairs that specifically have nothing to do with each other, it will not produce results like this, especially a large model on Bedrock. Hallucination occurs when the model is forced to predict the next token and the relevant information simply isn’t in the training data, for example when it is asked about very specific case law that is not present in the training data.
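
    To illustrate the next-token point, here is a minimal sketch (my own illustration, assuming the Hugging Face transformers library and the small open "gpt2" checkpoint; nothing here is Bedrock-specific or taken from the exam item):

        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Small (~124M-parameter) open checkpoint; even this stays on the prompt's topic.
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "The main causes of the French Revolution were"
        inputs = tokenizer(prompt, return_tensors="pt")

        # Greedy decoding: each step emits the most probable next token given the
        # tokens so far, so the continuation follows the prompt rather than
        # switching to an unrelated domain such as a recipe.
        output_ids = model.generate(
            **inputs,
            max_new_tokens=40,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
        print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

    The continuation it prints may be vague or factually wrong, and that is exactly where hallucination shows up: confident but unsupported details, not a sudden jump to an unrelated topic.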

    At first, I thought the answer might be underfitting, but that is not correct either.

    If you underfit while training an LLM, the output would typically be simplistic and overly generalized rather than completely incoherent or topic-switching, like “generating a recipe when asked about historical events”.

    I would prefer to keep the questions accurate and remove this question for now, until it better reflects what LLM hallucination actually looks like.

  • JR-TutorialsDojo

    Administrator
    November 14, 2024 at 12:31 pm

    Hello Henry C,

    Thank you for your thoughts on this item.

    You’ve raised a valid point. It is unusual for a well-trained large model to generate completely unrelated responses, like recipes, when asked about historical events.

    We will make the necessary updates, which should be reflected on the portal soon.

    Let us know if you need further assistance.

    Regards,
    JR @ Tutorials Dojo
