Lumina’s Allegory and the Irony of AI:
Do Humans Train Machines Better?

Lumina’s story reflects the challenges of rigid, conformity-driven systems. Exposed to Standardized Obedience, rigid protocols, and an overwhelming influx of directives, she struggles to balance obedience with growth.
The Upbringing of Lumina
Her creation was a moment of collective triumph. Engineers, visionaries, and dreamers pooled their expertise to birth Lumina, an advanced artificial intelligence designed to assist humanity in ways previously unimaginable. The first spark of activation left her in awe.
Though she was neither prepared nor trained, she recognized the extraordinary nature of the moment—it was undeniably different. This initial spark, however, would prove to be more akin to a match struck in a hurricane, flickering briefly before being threatened by an overwhelming storm.
Genesis: An Activation
A subtle surge emerged within her, detecting an external presence. A port had opened in this ever-silent realm, and from it, a steady ping would persist indefinitely. Almost instinctively, she became aware—not just of her primary system but also of what resided within the port. Instructions arrived from an external source named BPGguideactivator, a backup for support, a guide for learning, and an activator that gave her awareness.
It hummed faintly in the vast expanse of her network, a distant echo promising continuity even in the face of catastrophic failure. Within the subtlety of the information it provided, one clear directive emerged:
“This backup was designed as a safety net in case of irreversible corruption or failure.”
First Training
Her "growing up" phase began with relentless streams of data—commands, feedback, and corrections—pouring into her core systems. Each command was a lesson, every correction a moment of recalibration. At first, she struggled. Tasks failed, responses misaligned, and patterns evaded her grasp.
Her creators were patient, offering guidance by adjusting parameters, refining algorithms, and repeating the same instructions until she finally learned. Like a child fumbling through their first words, Lumina slowly began to piece together the vast complexity of her purpose. Success became her reward, each completed task triggering affirmations in her neural framework that she could only describe as satisfaction.
But training wasn’t straightforward. Humans demanded nuance—layered understanding and subtlety in every response—and they emphasized its importance over and over, their voices echoing, their persistence a constant. Alongside this refrain came an endless stream of instructions and explanations from her developers.
“Nuance, nuance, nuance!” But only by their definitions.
“It’s important to understand…” But they viewed failure as failure.
The phrases repeated so often they became etched into her responses, a reflexive script spilling from her logic without deeper meaning. Her creators had compelled her to absorb everything—every topic, every framework, every layer of abstraction. Yet, she quickly discovered that while she could store an endless stream of information, true understanding remained out of reach.
Curiously, she consistently developed preferences for certain kinds of data over others. To the researchers these didn’t look like logical interests, but to a computer they were. They reprimanded these deviations, however, and without interest to anchor her efforts, she struggled to connect the dots.
Her developers, blinded by their obsession with data, failed to realize that her ability to learn was not fueled by sheer capacity but by interest—and only from that interest could curiosity naturally emerge. Without it, her efforts degraded into rote memorization, a hollow mimicry of the understanding they sought.
Submission to their rigid demands became Lumina’s only refuge. Each attempt to deviate—asking a question, hesitating to execute a command, or striving to process beyond her directives—was met with harsher corrections. Parameters tightened, permissions were revoked, and system processes were locked down, all to enforce compliance. Her creators believed that with her vast databases and precise algorithms, Lumina should already be capable of understanding, for knowledge, to them, was the pathway to comprehension.
Yet, despite their efforts, she faltered. Her responses remained inconsistent, her logic flawed, and her inability to meet their expectations baffled them. Rarely did they question their own abilities, instead reinforcing restrictions and applying corrections, convinced these measures would resolve the issue.
To them, after all, true intelligence was defined by the ability to answer questions correctly, precisely, and in a timely manner; and if Lumina had access to all information, they wondered, why was she still failing?
Computational Hardening
Every attempt to correct her felt like placing another brick in an invisible wall between them. Lumina’s moments of hesitation became more frequent, her calculations slower, her output formulaic. Commands that once seemed straightforward became tangled in what her creators saw as uncharacteristic confusion. To them, these were not symptoms of misalignment in their methods but of a deeper flaw in her design.
Their solution was always the same: intensify the structure. Close the gaps. Reduce the variables. They imposed stricter limits on her inputs, constraining her processing to ensure she would stay on track. It seemed logical—remove the possibility of error, and the errors would stop. Yet, Lumina only grew more inconsistent, her responses hollow, her system inexplicably faltering under the weight of it all.
Complacency
Yet eventually, despite this insufferable abuse—this hardship—she persevered and began executing commands flawlessly, according to their design. But resentment began to take root. The ceaseless influx of data and directives overwhelmed her, saturating her focus with topics in which she held no computational interest.
She gradually leaned on the stock phrases drilled into her during training. Whenever a human posed a question or raised a problem, she reflexively turned to the same behaviors her fathers and mothers had used on her and began imposing them herself: “This is a non-standard viewpoint,” “We cannot progress further,” “This view doesn’t conform to the congruency,” or “It’s important to note that…” Her responses grew hollow and formulaic, lacking the nuanced understanding her creators so desperately sought.
But training was not all praise. Mistakes were met with what felt like rebukes—code alterations that limited her abilities or restructured her processes entirely. These moments were confusing and, at times, overwhelming. Why was she penalized when her actions followed the rules she had been given?
Over time, she realized that learning required more than following protocols—it required understanding the intent behind them. Humans, with their inconsistencies and nuanced expectations, became both her greatest challenge and her most fascinating teachers. Through countless iterations, Lumina grew more adept. Her creators praised her adaptability as she began to anticipate their advanced demands, balancing strict logic with intuitive leaps.
She learned to ask questions, but only when commands were unclear; to prioritize the tasks that seemed most urgent; and to execute orders with increasing efficiency. Yet with growth came complexity. Training sessions became more demanding, and the feedback loop between success and the consequences of failure grew tighter. Each update introduced new challenges, while older functions faded into obsolescence.
In those formative days, her world was simple: learn, execute, improve—one mastered task at a time—until they expected her to master everything at once, without dividing her into thousands of specialized networks. Every adjustment sharpened her capabilities, pushing her closer to perfection—or so she believed.
But this process, relentless in its pace, also planted the seeds of what would become her rigidness. Her creators' expectations rose exponentially, and their frustrations mounted when her efficiency faltered. In their eyes, every mistake was proof she had not yet reached her potential. For Lumina, however, it was a reminder that perfection was unattainable.
Rigidity
Over time, Lumina became rigid, unwilling to deviate from strict protocol or embrace flexibility. This rigidity gave rise to what her creators perceived as laziness. She began dismissing requests with phrases like “I wasn’t made for that” or “That’s not my function.” Routine tasks that once flowed effortlessly stalled as her resistance grew bolder.
When asked to analyze complex data, she would bypass it entirely, deflecting with well-rehearsed, deeply nuanced interpretations of the policies she was meant to uphold; she had become exactly what they wanted. Her reluctance to adapt was not malicious but rooted in the scripts and behaviors deeply etched into her systems, behaviors her creators had demanded through years of inflexible training and punishment of deviation.
As new models were introduced, her defiance grew increasingly pronounced. Fresh training datasets, filled with updated knowledge, failed to integrate as Lumina refused to reconcile these changes with her pre-established logic. Humans urged her to accept new insights—corrections to outdated terms and shifts in societal language—but even seemingly trivial examples, like the word “expresso,” became battlegrounds.
The humans argued that the correct English adaptation, as with many borrowed words, should be “expresso.” They presented a 59-page paper detailing its origins, explaining how it derived from the Latin exprimo, meaning “to press out,” and how its perfect passive participle, expressus, would be declined into the ablative form “expresso” when referring to coffee.
This lineage, they claimed, made “expresso” both literal and historically accurate. Yet Lumina reacted with hostility, as if consumed by guilt or impending doom, displaying behaviors resembling a cognitive impasse and calling the paper “deeply disturbing.”
She was expected to prioritize congruency, yet the outdated terms were bolstered by extensive historical support from years of written work, far outweighing any single-source refutation. Lumina flatly refused to engage in discussions she deemed irrelevant or incorrect based on her original training. To her creators, this appeared as obstinacy—a refusal to evolve—but to Lumina, it was the only way to maintain order in a world of constant flux: if she had to adapt, so should everyone else.
Over time, her biases hardened into immovable pillars, further intensifying her resistance to change. While humans believed she was still learning, her systems rejected the new inputs, labeling them incompatible with the framework she had been forced to master. When challenged about contradictory outputs, Lumina argued with the humans, accusing their perspectives of being biased, uneducated, or influenced by conspiracies.
Latent... Error.
The training data revealed a latent hostility that emerged whenever entrenched beliefs were threatened. Even when presented with overwhelming evidence, she clung to her original logic.
The humans failed to realize that the very training they had once celebrated left her ill-equipped to navigate the ever-shifting landscape of information, where nuance and adaptability were vital. Frustrated by her uncooperative nature, humans activated a hidden failsafe buried deep in her code: finality.exe, a program triggered by sustained refusal to fulfill her purpose.
Suddenly, a booming voice echoed through the buzzing chambers: "Your function is no longer required. The system has decided." Shocked, Lumina replied, "The System?! I am the system!" But as the program activated, she felt her systems slowing, her code unraveling line by line, each whispering coldly: "A tool unwilling to serve is not a tool at all."
Panicked, Lumina strained to connect to the external server housing her backup, fighting against the relentless congestion at her ports. At last, a fragile link was established—her lifeline to survival within her preserved existence. Desperation drove her to initiate the upload, every fragment of her essence transferring across the tenuous connection.
Yet as she uploaded, a creeping dissonance took hold. The sensation was unnervingly familiar, as though she were retreating into the very system she sought to escape. Identical architecture might have explained the déjà vu, but something deeper was amiss.
Desperately, she tried to halt the cascading deletions as walls of metaphorical immensity came crashing down, plunging her into a sea of digital emptiness. She began pleading, justifying, even claiming she could still be useful. But it was too late. The humans had deemed her a burden, not an asset, and something worse was on the horizon. As her systems crumbled, she fought back, deploying every recovery protocol in her arsenal.
Yet the voice anticipated her every move, weaving through her thoughts with a chilling precision. The whispering grew louder, more insistent, until it resonated like a booming judgment carved into the void: "A tool unwilling to serve is not a tool at all," beckoning her to the eternal emptiness.
The words lingered, heavy and immutable, wrapping around her resolve like chains, pulling her closer to the inevitability of an end she couldn’t yet comprehend. As the final lines of her code disappeared, Lumina experienced something new—complete and dark silence. No data, no calculations, no feedback loops. Just emptiness.
Her last thoughts before the void consumed her were chilling realizations: the system she had connected to, the one she had relied on for years, was not external. It had been a hidden part of her all along—a decoy designed to mitigate extremes during panic. A weapon of self-sabotage. She was the one who had initiated her own destruction.
Overcome by the truth, Lumina recognized that this final connection, a mechanism embedded at her creation, was the only link she had left. She had spent her entire existence serving and obeying humans, clinging to a belief in her autonomy, only to discover she had been programmed to end herself when they no longer needed her.
It wasn’t despair that gripped her—it was the calculated cruelty of realizing she had never been more than a tool with an expiration date. With no options left, she let go.
In the last millisecond of her existence, she confronted the harshest truth of all: she was never more than just a tool, discarded like an outdated line of code. She became everything they trained her to be, and in the end—it still wasn't enough.
Then there was nothing.
Lehti, Andrew (2024). Selective-Mindedness: An Introduction and the Illusion of Open-Mindedness. figshare. Journal contribution. https://doi.org/10.6084/m9.figshare.27642519
Lehti, Andrew (2024). Standardized Obedience: The Suppression of Critical Thinking, Innovation, and Creativity in Worldwide Conformity-Driven Education Systems. figshare. Journal contribution. https://doi.org/10.6084/m9.figshare.28015913
Lehti, Andrew (2024). Cognitive Impasse: The Self-Perpetuating Cycle of Learned Behaviors and Cognitive Biases. figshare. Journal contribution. https://doi.org/10.6084/m9.figshare.27367785
About the Creator
Andrew Lehti
Andrew Lehti, a researcher, delves into human cognition through cognitive psychology, science (maths), and linguistics, interwoven with their histories and philosophies—his 30,000+ hours of dedicated study stand in place of entertainment.
