This is where a good storyteller would have a blast.
Maybe a mage could heal it, but then they would take on the disability themselves.
Or a magical disability is the result of a one-on-one battle with another magic wielder; only a being of equal power can cause permanent damage.
The disability is a payment for some rare power. Maybe you lose your eyes but can now see the astral plane and pilot the Event Horizon.
My argument is thus:
LLMs are decent at boilerplate. They’re good at rephrasing things so that they’re easier to understand. I had a student who struggled for months to wrap her head around how pointers work; two hours with GPT and the ability to ask clarifying questions, and now she’s rockin’.
I like being able to plop in a chunk of Python and say, “type annotate this for me and none of your sarcasm this time!”
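To give you a feel for what I mean, here’s a made-up snippet (the `top_words` helper is hypothetical, not from any real codebase): the untyped version works fine, but the annotated one documents itself.

```python
from collections import Counter

# Before: works, but callers have to guess what goes in and comes out.
def top_words(text, n=5):
    words = text.lower().split()
    return Counter(words).most_common(n)

# After: same behavior, but the signature now tells the whole story.
def top_words_annotated(text: str, n: int = 5) -> list[tuple[str, int]]:
    words = text.lower().split()
    return Counter(words).most_common(n)
```

That kind of mechanical pass is exactly the chore I’m happy to hand off.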
But if you’re using an LLM as a problem solver rather than as an accelerator, you’re going to lack some of the deep understanding of what actually happens when your code runs.