
AutoGPT - the Next Frightening Step in AI

With a real-time web connection and automatic breakdown of goals into tasks, it uses GPT-4 as it accelerates our race to the cliff edge

By James Marinero · 3 min read
Autogpt logo and author overlays. Credit: By AutoGPT Development Team — agpt.co, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=131377085; skull overlay: By Unknown author — https://vectorlogo4u.com/skull-and-crossbones-sign-svg/, Public Domain, https://commons.wikimedia.org/w/index.php?curid=516223

It’s barely six months since the launch of ChatGPT, which raised public consciousness about AI and set a few political alarm bells ringing. But now it seems to be getting worse by the day.

Rapid developments

I wrote a few articles about the datedness of the ChatGPT training data and expressed other concerns, nay, fears. No sooner had my words been written than we were swamped by a raft of tools built on ChatGPT, and there was an obscene scramble by every man and his dog to integrate the software into their systems. The motivation was profit and self-defence.

That was followed rapidly by GPT-4 and even more tools.

And now there’s an even more ominous development — AutoGPT.

Increasing acceleration

Auto-GPT is an “AI agent” that, given a goal in natural language, will attempt to achieve it by breaking it into sub-tasks and using the internet and other tools in an automatic loop. It uses OpenAI’s GPT-4 or GPT-3.5 APIs, and is among the first examples of an application using GPT-4 to perform autonomous tasks. (Wikipedia)

This is how Forbes saw it:

Impressive as they are, until now, LLMs [large language models] have been limited in one significant way: They tend to only be able to complete one task, such as answering a question or generating a piece of text, before requiring more human interaction (known as prompts).

This means that they aren’t always great at more complicated tasks that need multi-step instructions or are dependent on external variables.

Enter Auto-GPT — a technology that attempts to overcome this hurdle with a simple solution. Some believe it may even be the next step towards the holy grail of AI — the creation of general, or strong, AI.
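For the technically curious, the core idea is simple enough to sketch in a few lines of Python. What follows is a minimal, illustrative sketch of such an agent loop, assuming the pre-1.0 OpenAI Python client that was current in May 2023; the prompts and function names are my own invention, not Auto-GPT’s actual code, and the real thing adds web browsing, file storage, memory and a library of plugin “commands” on top of this basic pattern.

    # Minimal sketch of an Auto-GPT-style loop, assuming the pre-1.0 OpenAI
    # Python client (May 2023 era). Prompts and function names are
    # illustrative only, not Auto-GPT's actual implementation.
    import openai

    openai.api_key = "sk-..."  # your OpenAI API key

    def ask(messages):
        # Send the conversation so far to GPT-4 and return the reply text.
        response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        return response["choices"][0]["message"]["content"]

    def run_agent(goal, max_steps=5):
        # Step 1: ask the model to break the goal into sub-tasks.
        plan = ask([
            {"role": "system", "content": "You are an autonomous planning agent."},
            {"role": "user", "content": f"Break this goal into numbered sub-tasks: {goal}"},
        ])
        print("Plan:\n", plan)

        # Step 2: loop, asking the model to work one sub-task at a time.
        history = [{"role": "system", "content": f"Goal: {goal}\nPlan:\n{plan}"}]
        for step in range(max_steps):
            history.append({"role": "user", "content":
                            "Do the next sub-task, report the result, and say DONE "
                            "if the overall goal is complete."})
            result = ask(history)
            history.append({"role": "assistant", "content": result})
            print(f"Step {step + 1}:\n{result}\n")
            if "DONE" in result:
                break

    run_agent("Summarise today's top three AI news stories")

Strip away the web browsing, the memory and the plugins, and that plan/act/check loop is essentially what makes Auto-GPT feel qualitatively different from a single ChatGPT prompt.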

And now, within days, we have follow-on products built on AutoGPT. Products such as the MetaTrader plugin, ‘for traders everywhere’, as if we didn’t have enough problems with trading systems already.

The Auto-GPT MetaTrader Plugin enables traders to connect their trading accounts to the Auto-GPT platform, providing them with access to an advanced tool that allows them to leverage AI agents to generate trading signals and enhance their decision-making capabilities. — MarketWatch
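For illustration only, here is a hypothetical sketch of how a plugin like that might expose a trading-signal ‘command’ for an agent to call; every name in it (Command, register_command, get_trading_signal) is invented for this example and is not the actual Auto-GPT MetaTrader plugin API.

    # Hypothetical sketch: how a plugin might register a "command" that an
    # agent can discover and call. Names are invented for illustration and
    # do not reflect the real Auto-GPT MetaTrader plugin.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Command:
        name: str
        description: str  # the agent reads this when deciding what to call
        run: Callable[..., str]

    COMMANDS: Dict[str, Command] = {}

    def register_command(cmd: Command) -> None:
        # Make the command discoverable by the agent's planning step.
        COMMANDS[cmd.name] = cmd

    def get_trading_signal(symbol: str) -> str:
        # Toy stand-in: a real plugin would query a broker or market-data API.
        return f"HOLD {symbol} (no edge detected)"

    register_command(Command(
        name="get_trading_signal",
        description="Return a buy/sell/hold signal for a ticker symbol",
        run=get_trading_signal,
    ))

    # The agent picks a command by name and supplies arguments it chose itself:
    print(COMMANDS["get_trading_signal"].run("NDAQ"))

The unsettling part is not the plumbing; it is that the agent, not the human, decides when and how often to call a function like this against a live trading account.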

And this comes in the same month that the ‘Godfather of AI’, Geoffrey Hinton, departed Google, citing concerns about the dangers of the technology.

Project planning

In another life I spent many years in a career that involved developing, testing and implementing software. Project planning with Microsoft Project was a daily staple for me. And after the planning, making it all happen.

Now I’m wondering what AutoGPT would come up with as an interface to MS Project.

Bear in mind that AutoGPT has real-time links to the internet, to news feeds and to goodness knows what other data sources, in the hands of people both good and bad.

Goal setting

I’m also wondering about the complexity and ‘vagueness’ of the goals that AutoGPT might be able to handle.

How about these goal prompts:

  • Prepare a plan to make a 5% daily profit trading NASDAQ AI equities
  • Develop a plan to subvert UK politics
  • Design a plan to reverse the failures of the Russian Armed Forces
  • Outline the steps necessary for China to successfully invade Taiwan without starting a nuclear war

You don’t have to be a rocket scientist to see where this is all going.

Conclusion

The pace of AI development is terrifying, and politicians need to act very quickly to prevent a catastrophe.

It will not take long before these systems are penetrating cyber defences (for example, those of civilian infrastructure) and connecting with other systems under the putative control of bad actors.

Image by Richard Duijnstee from Pixabay

Some of the dangers of AI chatbots were “quite scary”, he [Geoffrey Hinton] told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”

But, he added, he was also concerned about the “existential risk of what happens when these things get more intelligent than us”. — The Guardian (ibid.)

The doomsday clock is ticking down for mankind, assuredly.


***

My novels are available at my Gumroad bookstore. Also at Amazon and Apple

Canonical link: This story was first published on Medium on 9 May 2023.

Tags: artificial intelligence, buyers guide, future, humanity, product review, science, science fiction, tech

About the Creator

James Marinero

I live on a boat and write as I sail slowly around the world. Follow me for a varied story diet: true stories, humor, tech, AI, travel, geopolitics and more. I also write techno thrillers, with six to my name. More of my stories on Medium
