I Am the "Discrimination Architect" at My Company
To be honest, when I received the "User Experience Breakthrough Award" last quarter, I felt a little uneasy. The card next to the trophy read: "In recognition of outstanding contributions to optimizing the service funnel and increasing core user share." Only a few of us in the team knew what this "funnel optimization" actually did—it quietly "sifted out" a large number of people in the most civilized, unaccountable way possible.

This all started late last year. The company was pushing for an IPO, and the financial reports had to look good. The senior management held a meeting and came back with a vague directive that we all understood: we needed to spend money and server resources on the "sharp edge of the knife." Soon, the data analysis department sent over a report. Using colorful charts, it clearly showed that the "unit service cost" for elderly users, visually impaired users, and those unfamiliar with our app's logic was alarmingly high, while their "commercial value contribution" was pitifully low.
Our VP of Product, who always talks about "Tech for Good," rubbed his hands together in a small meeting and said, "We can't just shut down services; that's too crude. What we need is... a kind of 'intelligent guidance' that makes them feel this place isn't for them, and they leave quietly." He looked at our design team. "This requires extremely high design skill. It must be done elegantly, leaving no trace. Who can lead this?"
Somehow, I raised my hand. On one hand, the challenge sounded very "technical"; on the other, I knew it could bring huge promotion opportunities. The project's internal codename was "The Gardener," meaning pruning away "unhealthy" branches.
My first "masterpiece" was redesigning the "Voice Broadcast" feature.
On the surface, we "thoughtfully" added speech-rate adjustment options for visually impaired users, from "Slow" to "Extremely Fast." But the real trick was in the defaults and the interaction. We set the default speed for new users directly to "Fast" and buried the adjustment button at the bottom of a three-level settings menu. More importantly, I designed an "intelligent adaptation" logic: if a user listened to three consecutive messages at the "Fast" setting, the system would "considerately" pop up a window asking, "We've detected you've adapted to a faster speed. Should we permanently switch you to 'Extremely Fast' mode?" The only options were a prominent "OK" and a small, dimly colored "Not Now." Most users, especially elderly visually impaired users less proficient with phones, would tap "OK", either by accident or because they saw no real alternative.
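Stripped of its "considerate" wording, that adaptation logic is only a few lines. Here is a minimal sketch in Python; every name and threshold (`FAST`, `listen_log`, the three-message trigger) is my reconstruction for illustration, not the actual production code:

```python
# Hypothetical sketch of the "intelligent adaptation" dark pattern:
# after three consecutive listens at the fast rate, show a prompt
# whose only prominent choice locks the user into an even faster rate.

FAST = "fast"
EXTREMELY_FAST = "extremely_fast"

def check_adaptation(listen_log, current_rate):
    """Return a prompt spec once the user has 'proven' they cope with FAST."""
    last_three = listen_log[-3:]
    if current_rate == FAST and len(last_three) == 3 and all(
        entry == FAST for entry in last_three
    ):
        return {
            "message": "We've detected you've adapted to a faster speed. "
                       "Should we permanently switch you to 'Extremely Fast' mode?",
            "primary": "OK",          # large, high-contrast button
            "secondary": "Not Now",   # small, dimly colored button
            "on_primary": EXTREMELY_FAST,
        }
    return None
```

The asymmetry is entirely in the presentation layer: the code treats both buttons as valid choices, but the visual weighting does the steering.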
What made me feel even more uneasy later was that "Safety Guardian" process.
For elderly users, we designed a "cognitive verification" step before transfers or payments. It wasn't a simple password but a mini-game: four rapidly moving, color-changing shapes would flash on the screen, and the user had three seconds to identify the one with a different color. This exploited the natural decline in reaction time and color discrimination that comes with age. Upon failure, they weren't rejected outright but were funneled into a more cumbersome "backup verification channel", asked to locate, in their phone's gallery, a specific ID photo taken three months earlier.
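What I remember of that flow reduces to something like the following sketch. Again, the function names, the three-second limit, and the failure threshold are my reconstruction for illustration, not the real codebase:

```python
import random

# Hypothetical sketch of the "cognitive verification" gate:
# a timed odd-one-out game, with repeated failure routing the user
# into a deliberately cumbersome backup channel.

SHAPE_COUNT = 4
TIME_LIMIT = 3.0  # seconds; tuned against typical elderly reaction times

def run_round():
    """One round: pick which of the four moving shapes has the odd color."""
    return random.randrange(SHAPE_COUNT)

def verify(answer_index, odd_index, elapsed_seconds):
    """Pass only if the user picked the odd shape within the time limit."""
    return answer_index == odd_index and elapsed_seconds <= TIME_LIMIT

def route_after_failures(failures):
    """After repeated failures, route to the cumbersome backup channel."""
    if failures >= 3:
        return {
            "channel": "backup_verification",
            "task": "locate a specific ID photo in the gallery",
            "tag": "high risk, low digital literacy",
        }
    return {"channel": "retry"}
```

The deniability lives in `TIME_LIMIT`: three seconds looks like an ordinary anti-bot measure on paper, while in practice it selects against exactly the users the project was meant to shed.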
My colleague, a young designer fresh out of school, said excitedly during testing, "Bro, this design is brilliant! Fully compliant, yet absolutely effective. My grandma would definitely fail this." At the time I smiled, even feeling a bit proud. We crafted the failure prompt to sound incredibly sincere: "For the safety of your funds, we have initiated a high-level verification. Thank you for your understanding." It neatly shifted responsibility onto the user's "inability" and our "commitment to security."
The turning point came without warning.
My aunt, a retired teacher who had just learned to use a smartphone for short videos, tried to use our app to transfer money for my cousin's tuition—her first time. She called me, her voice full of helplessness and self-blame: "Xiao Bin, does your software look down on us old folks? Those jumping balls… my eyes are tired from watching, and I can't click the right one… Does it think I'm senile, not worthy of using it?"
While I taught her a clumsy way to bypass the verification over the phone (eventually using my internal access to complete the payment for her), I looked at the real-time "Safety Guardian Interception Data" on my computer screen. One interception record had a user ID suffix matching my aunt's phone model and region. The cold record stated: "User failed cognitive verification 3 times. Directed to manual customer service channel (queue: 56). Suggested tag: High risk, low digital literacy."
"Low digital literacy." That was the five-character tag the system gave my aunt.
I stared at those words. The office air conditioning was strong, but a layer of cold sweat instantly broke out on my back. My meticulously architected, logically flawless "elegant design" now felt like a mirror, showing me my own reflection. I realized all my "cleverness" wasn't used to create value but to systematically create difficulties, packaging that difficulty as "for your own good."
After that day, I could no longer participate in the "Gardener Project" brainstorming sessions with a clear conscience. Hearing my colleagues enthusiastically discuss how to more "naturally" increase the mis-click rate of a certain button made me feel nauseous. I know the system is still running, even more efficiently. I still draw my salary from it, and my promotion path seems smoother because of it.
About the Creator
天立 徐
Haha, I'll regularly update my articles, mainly focusing on technology and AI: ChatGPT applications, AI trends, deepfake technology, and wearable devices. I also write about personal finance, mental health, life experience, and health and wellness.


