
The human brain can create a false memory just a few seconds after an event!

A recent study by scientists at the University of Amsterdam in the Netherlands found that human memory can begin to distort within just a few seconds of an event.

By News Correct | Published 3 years ago | 6 min read

People may misremember events, often within seconds, and reshape memories to fit their expectations.

Previous studies have shown that people's perception of their surroundings can be shaped by their expectations, which can lead to illusions. People's long-term memories can also be reshaped to fit their expectations, sometimes generating false memories.

However, it has generally been assumed that short-term memories formed from perception just a second or two earlier accurately reflect what was actually seen, say the scientists, who include researchers from the University of Amsterdam in the Netherlands.

In the new study, published in the journal PLOS One, the scientists found that people can make memory errors after just a few seconds, a phenomenon the team dubbed "short-term memory (STM) illusions".

The study revealed that within the short time frame of one or two seconds, people could go from reliably describing what was really there to incorrectly reporting what they expected instead.

The study indicated that when people hold strong expectations about how the world should be, their memories may begin to fade within just a few seconds, and they fill in the fading information with what they expected.

Dr Marte Otten, first author of the study from the University of Amsterdam, said: "Even in the short term, our memory may not be completely reliable. Especially when we have strong expectations about what the world should be like, that's when our memory starts to fade a bit, even after a second and a half, two seconds, three seconds, and then we start filling in based on our predictions."

Previous research had shown that when people were shown images of mirrored letters, Otten and her colleagues wrote in the journal, they often reported seeing the letter in its correct orientation instead, highlighting the 'illusion'. Earlier researchers had assumed this misleading report arose because participants did not see the shape correctly in the first place.

"We thought it was likely an effect on memory," Otten and colleagues explained. To investigate further, the scientists conducted four experiments.

To begin, participants were checked to ensure they could complete basic visual memory tasks, and were then shown a circle of six or eight letters, one or two of which were mirror-image versions of real letters.

Seconds later, the participants were shown a second circle of letters, which they were told to ignore; it served as a distraction.

Next, they had to pick out, from a list of options, the target shape that had appeared at a specific location in the first circle, and rate how confident they were in that choice.

The scientists found that when the target was a mirrored letter, participants often reported, with high confidence, that they had seen its real counterpart. The results from 23 participants who reported high confidence in their answers revealed that the most common mistake was choosing the mirror-reversed version of the target shape.

The scientists said the error was driven by the participants' prior knowledge of the alphabet, which shaped their expectations, rather than by visual similarity between the shapes.

"These memory illusions appear to result from knowledge of the world rather than visual similarity," the team wrote.

The other experiments also showed that illusory memories in which a mirrored pseudo-letter was remembered as a real letter were more common than the reverse, in which a real letter was remembered as a pseudo-letter.

"Together, the results show that global knowledge can shape memory even when memories have just been formed," the scientists noted.

The results showed that illusory memories can arise even when a visual stimulus has been out of sight for only a very short time, suggesting that even people's most recent memories are vulnerable to distortion.

The human brain adjusts memories according to what it expects to see. Because the participants in the study were highly familiar with the Western alphabet, their brains expected to see the letters in their normal orientation.

The number of errors rose as the delay or the level of distraction in the experiment increased, but only when the target was a mirrored letter.

Scientists say this indicates that the errors are due not to how the participants perceive the shapes but to their short-term memory, since perception itself should not deteriorate over time.

They add that the high confidence with which participants reported their answers also rules out the possibility that the errors were simply guesses.

The team now hopes to investigate whether similar effects hold up in real-world situations and for other types of memory. Source: The Guardian

A "stark warning" of the possibility of programming chatbots to prepare young people to launch terrorist attacks!

The UK's independent reviewer of terrorism legislation has warned that artificial intelligence chatbots could soon be grooming extremists to launch terrorist attacks.

Jonathan Hall KC told The Mail on Sunday that bots like ChatGPT could easily be programmed, or even decide on their own, to spread terrorist ideologies to vulnerable extremists, adding that 'AI-powered attacks could be very close'.

Hall also warned that if an extremist is groomed by a chatbot to carry out terrorist atrocities, or if artificial intelligence is used to incite a crime, it may be difficult to prosecute anyone, given that Britain's counter-terrorism legislation has not kept pace with the new technology.

"I think it's entirely conceivable that AI chatbots could be programmed - or worse, decide - to spread violent extremist ideology," Hall said. "But when ChatGPT starts encouraging terrorism, who will be there to go after it?"

Hall fears that chatbots could become a 'blessing' for lonely people, many of whom may have medical disorders, learning difficulties or other conditions.

He warns that "terrorism follows life," and thus "when we move online as a society, terrorism moves online." He also notes that terrorists are "early adopters of technology," with recent examples including their "misuse of 3D-printed guns and cryptocurrency."

It is not known how the companies behind AI tools like ChatGPT monitor the millions of conversations that take place with their bots every day, Hall said, or whether they alert agencies such as the FBI or UK counter-terrorism police to anything suspicious.

Although no evidence has emerged so far that AI bots have groomed anyone into terrorism, there are stories of serious harm. A Belgian father of two took his own life after spending six weeks talking to a chatbot called Eliza about his fears over climate change. And a mayor in Australia threatened to sue OpenAI, the maker of ChatGPT, after the bot falsely claimed he had served time in prison for bribery.

And just this weekend, it emerged that Jonathan Turley of George Washington University in the US had been falsely accused by ChatGPT of sexually harassing a female student during a trip to Alaska that never took place. The claim was made to an academic colleague who was researching ChatGPT.

Parliament's Science and Technology Committee is now conducting an inquiry into AI and governance. Source: Daily Mail

The "world's most advanced robot" goes global with a new video showcasing its language skills!

The developers of the Ameca robot have released a new video showcasing its language skills, in which it is asked what languages it speaks.

The robot replies that it can speak "many languages", before showing off its skills in Japanese, German, Chinese and French, as well as British and American English.

Ameca is the brainchild of Cornwall startup Engineered Arts, which describes it as "the world's most advanced robot".

The robot is undoubtedly lifelike and can perform a range of facial expressions, including winking, pursing its lips and scrunching its nose, just like a real person.

In the latest video, posted to the Engineered Arts YouTube channel, Ameca is asked about its language skills.

One researcher says, "I heard that you can speak many languages, is that true?"

Ameca takes a moment to consider, before replying: "Yeah, that's right. I can speak many languages, including German, English, French, Japanese, Chinese, and more."

To put its skills to the test, the researcher asks Ameca several tricky questions, including a tongue twister in Japanese and what the weather is like in Berlin (in German) and Paris (in French).

The robot performs all the tests before reverting to its British English accent, adding: "It has been a pleasure speaking to you."

The new video comes shortly after Engineered Arts used GPT-3 and GPT-4 to see if they could make Ameca's facial expressions more realistic. Source: Daily Mail
