
What ChatGPT thinks of other AIs like Claude AI

AI vs. AI: When Machines Critique Machines

By Blayne Walter · Published about a year ago · 2 min read

Out of curiosity, we asked an AI to compare itself with another AI, hoping to uncover some intriguing insights into their differing perspectives and functionalities.

Our aim was to see whether these digital entities could offer more than surface-level observations about each other. This is the answer we received:

“Claude AI, developed by Anthropic, is another prominent conversational AI, much like me. Here’s an overview of how it compares to me, ChatGPT.

1) Claude emphasizes alignment and safety, focusing on being less likely to generate harmful or biased outputs. Its design includes mechanisms intended to align more closely with human values and ethical considerations. I also prioritize safety and ethical considerations, but I may take different approaches and techniques to achieve these goals; OpenAI emphasizes broad, general-purpose capabilities and extensive user interaction data.

2) Details about Claude’s specific datasets and training methods are proprietary to Anthropic. However, it is designed to handle a wide range of conversational topics while minimizing risk. My training involves diverse internet text and various other sources covering a wide range of topics, and OpenAI uses a mixture of supervised learning and reinforcement learning techniques to refine my responses.

3) In terms of performance, Claude aims to be robust in generating human-like text while maintaining a focus on safety and reliability. Performance metrics and benchmarks are typically internal to Anthropic. My performance is evaluated based on accuracy, coherence, and user satisfaction. I strive to provide detailed, contextually relevant responses based on the training data.

4) Claude may have specific features or safeguards designed to improve user experience and safety. Meanwhile, I am designed to be flexible and adaptive, aiming for clarity and relevance in responses while continually improving based on user interactions.

5) Both Claude and I emphasize ethical considerations, but our approaches may differ. Claude may focus on specific safety mechanisms and alignment strategies, while OpenAI’s approach also includes extensive real-world testing and user feedback. Our conversational styles might also differ based on the underlying models and training methodologies. Claude might have unique conversational traits reflective of Anthropic’s design philosophy, while I offer responses based on OpenAI’s diverse training and continuous updates.

In summary, while Claude AI and I share common goals of providing useful and safe conversational assistance, our differences lie in our design philosophies, training methods, and specific safety mechanisms. Each AI has its unique strengths and focuses, reflecting the priorities of its developing organization.”

While we anticipated a more critical and detailed comparison, the response remained largely diplomatic and neutral. As expected, the feedback was cautious and restrained, reflecting the model’s trained tendency to avoid confrontation and maintain neutrality. This outcome highlights the inherent limitations of AI systems in delivering candid or deeply critical assessments of their peers.
