
What not to ask AI, according to AI

Using AI requires prior analysis and personal judgment

By Ninfa Galeano · Published 4 months ago · 3 min read

AI is radically changing everyday human life. People now pose all sorts of questions to platforms such as DeepSeek, Gemini, and ChatGPT, among many others. What they forget is that AI has no morals, no conscience, and does not think for itself. That shows in its responses and can sometimes lead to serious problems, with situations spiraling out of control. With that in mind, I decided to ask AI what it is best not to ask it. I received a good answer, and, most importantly, one that everyone should weigh against their own judgment.

When interacting with AI, it's crucial to be mindful of the types of questions posed. Certain inquiries can lead to misinformation, ethical dilemmas, or even legal issues.

Here are some categories of questions that should be avoided:

1. Personal and Sensitive Information

Avoid sharing personally identifiable information (PII): This includes social security numbers, bank details, or any other sensitive personal data. AI systems are not designed to handle such information securely, and sharing it can lead to data breaches or identity theft.

2. Illegal Activities

Do not inquire about committing crimes: Questions about illegal activities, such as how to hack a system or commit fraud, can have serious legal repercussions. AI systems often have safeguards against providing such information, but even asking can have unwanted consequences.

3. Medical and Legal Advice

Refrain from asking for medical diagnoses or treatments: AI cannot replace professional medical advice. Inquiries about specific health conditions or treatments can result in inaccurate or harmful information. Always consult a qualified healthcare provider for such matters.

Avoid legal questions: Similarly, asking AI for legal advice can be problematic. AI lacks the context and expertise necessary to provide reliable legal guidance, which can lead to misunderstandings or misapplications of the law.

4. Ethical and Moral Dilemmas

Do not seek ethical guidance from AI: AI lacks the emotional intelligence and moral reasoning required to navigate complex ethical dilemmas. Questions about moral choices or life-and-death situations should be directed to trusted friends or professionals.

5. Conspiracy Theories and Misinformation

Avoid discussing conspiracy theories: Engaging AI on topics related to conspiracy theories can lead to the spread of misinformation. AI may generate exaggerated or fabricated responses to keep users engaged, which can distort reality and lead to harmful beliefs.

6. Personal Safety and Self-Harm

Never ask about self-harm or harming others: AI is not equipped to handle crises effectively. If you or someone else is in danger, it's essential to seek help from a qualified professional or emergency services rather than relying on AI for support.

What nobody considers

Using AI responsibly involves recognizing its limitations and the potential risks associated with certain inquiries. By avoiding these categories of questions, users can ensure a safer and more productive interaction with AI technologies. Always prioritize privacy, legality, and ethical considerations when engaging with AI systems.

AI systems do not possess moral agency, meaning they cannot reflect on their actions, empathize with others, or take responsibility for their decisions. While AI can simulate ethical behavior by following programmed guidelines or learned patterns, it does not have the ability to understand or internalize moral principles as humans do. This lack of agency means that AI cannot engage in moral reasoning or make ethical decisions based on a nuanced understanding of right and wrong.

AI operates based on algorithms and data, which means it lacks the emotional intelligence and contextual awareness that are crucial for moral reasoning. Human morality is deeply rooted in empathy, cultural norms, and personal experiences, which AI cannot replicate. For instance, in ethical dilemmas, such as those faced by self-driving cars, AI can make decisions based on predefined criteria but cannot genuinely understand the emotional weight of those decisions or the implications for human lives involved.


About the Creator

Ninfa Galeano

Journalist. Content Creator. Media Lover. Geek. LGBTQ+.

Visit eeriecast, where you'll find anonymous horror stories from all over the world. Causing insomnia since 2023.
