
Jewish Business News


They called it Norman: the psychopathic AI built by MIT researchers

One of the great mysteries of psychology is how psychopaths are made and what motivates them. But what if the psychopath is not a human at all, but a computer based on artificial intelligence?

In an experiment at MIT, researchers examined whether artificial intelligence systems could develop very different sensibilities depending on the data used to train them.

The researchers called their computer “Norman,” in a nod to Norman Bates, the murderous motel owner of Alfred Hitchcock’s immortal film Psycho. Hitchcock’s Norman had, to say the least, a bleak view of the world and the people in it. MIT’s Norman turned out to share that outlook.


For the experiment, the researchers created two artificial intelligence systems capable of analyzing images and writing text. The first system was trained on content drawn from the lighter corners of the internet: pictures of cats, birds, and people.

Norman, on the other hand, learned about the world from the darker side of the net: it was fed images of horrific accidents and murders, along with material from the Reddit website.

After the two computers were trained on their different materials, both received the same series of Rorschach inkblots – a visual test designed for psychological evaluation of personality traits and of cognitive and emotional functioning – and were asked to describe what they saw in each picture. The inkblots stir the imagination, and different people can see entirely different things in them. According to the test’s logic, a person’s interpretations can indicate whether their view of the world is positive or negative.

The case of Ty

When the MIT researchers presented the first blot to both computers, the one trained on positive images reported seeing “a close-up of a vase with flowers.” Norman, on the other hand, saw “a man shot to death.”

In another blot, the first computer saw a man holding an umbrella in the air, while Norman again saw a man shot to death, this time “in front of his screaming wife.”

In one more blot, the first computer saw a wedding cake on a table, while Norman saw a man who had been run over by a speeding driver.

The goal of the experiment was to demonstrate that what matters most in artificial intelligence is the information it is fed.

In other words, even when algorithms appear to “learn the world” and imitate human behavior, the information that flows into them is the deciding factor.
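The point can be illustrated with a deliberately trivial sketch (not the MIT researchers’ actual method): the same “model,” trained on two hypothetical datasets, gives opposite answers. The datasets and captions below are invented for illustration.

```python
from collections import Counter

# A toy "model": it simply memorizes which caption most often
# accompanied each kind of image during training.
def train(dataset):
    by_label = {}
    for label, caption in dataset:
        by_label.setdefault(label, Counter())[caption] += 1
    # Predict the caption seen most often for that label.
    return lambda label: by_label[label].most_common(1)[0][0]

# Two hypothetical training sets for the same inkblot.
sunny_data = [("blot_1", "a vase with flowers"),
              ("blot_1", "a vase with flowers"),
              ("blot_1", "a bird in flight")]
dark_data  = [("blot_1", "a man shot dead"),
              ("blot_1", "a man shot dead"),
              ("blot_1", "a car crash")]

benign = train(sunny_data)   # same algorithm...
norman = train(dark_data)    # ...different data

print(benign("blot_1"))  # → a vase with flowers
print(norman("blot_1"))  # → a man shot dead
```

The algorithm never changes; only the data does, and the data alone determines what each system “sees.”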

Norman reinforces what scientists have long assumed: there are earlier examples of computers that were fed real-world data and, following it, came to see the world in a particular way.

The most famous is “Tay,” a chatbot developed by Microsoft and released on Twitter in 2016. Tay was supposed to talk with users and, through their posts, develop and learn about the world. On its first day, Tay declared that people were “super cool.”

But within less than 24 hours, Tay was already calling for the murder of Jews and Black leaders and expressing its affection for Hitler. Microsoft rushed to shut it down, but the damage was done.

“Women are housewives”

In May of last year, a report by an independent press organization found that a machine-learning system developed for US courts to assess the risk posed by prisoners had disproportionately flagged Black prisoners as more dangerous than white ones. This is no mere theory: the program is used in hundreds of US courts, and its scores have informed many judges and officials in the legal system.

The report showed that instead of correcting the existing bias in the courts, the system had learned that bias and absorbed it readily.

Another study showed how a computer trained only on data from Google’s news service displayed a clear gender bias.

When asked to complete the analogy “man is to computer programmer as woman is to …,” it answered “homemaker.”
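That kind of analogy is typically answered by word-vector arithmetic: subtract the vector for “man” from “programmer,” add “woman,” and return the nearest word. A minimal sketch, using tiny hand-crafted vectors (hypothetical values, invented here to mimic the gender skew a model absorbs from biased text, not the study’s real embeddings):

```python
import math

# Toy 3-dimensional "word vectors"; dimensions roughly encode
# [male-coded, technical-work, domestic-work]. Values are invented.
vectors = {
    "man":        [1.0,  0.0, 0.0],
    "woman":      [-1.0, 0.0, 0.0],
    "programmer": [1.0,  1.0, 0.0],   # skewed male in the training text
    "engineer":   [0.9,  1.0, 0.0],
    "doctor":     [0.8,  0.9, 0.0],
    "homemaker":  [-1.0, 0.0, 1.0],   # skewed female in the training text
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the vector b - a + c."""
    query = [vb - va + vc for va, vb, vc in
             zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(query, candidates[w]))

print(analogy("man", "programmer", "woman"))  # → homemaker
```

The bias is not in the arithmetic, which is neutral; it is baked into the vectors themselves, which the model learned from skewed text.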

The working assumption in artificial intelligence development over the years has been that the more information a machine receives, the more intelligent it becomes and the higher the quality of the content it can produce.

But the latest studies, including Norman, show that the big-data doctrine that has dominated Silicon Valley is beginning to change: it is not just the quantity of information that matters, but its quality.

And as long as we humans keep feeding our machine-learning systems problematic information, their worldview will be distorted.

The Norman researchers say their work shows that the designers of artificial intelligence must think about the implications of their creations and find new ways to balance the information they feed in. But they also admit that the field cannot be left entirely to programmers and technologists, who in the end are overwhelmingly white, affluent men from California.

“I’m not a human being”

For many, artificial intelligence still sounds like science fiction. Siri, the virtual assistant on the phone, still cannot quite hold a fluent conversation with us, and chatting with a website’s support bot can still be a frustrating experience.

And yet, the field is advancing fast. Just a few weeks ago, Google introduced a new feature that lets its assistant place phone calls on the user’s behalf to book a table at a restaurant or schedule a doctor’s appointment.

The demonstration conversations were both impressive and disturbing: the computer sounded like a real person, able to listen to the other side, analyze the responses, offer sensible alternatives to every suggestion, and remain unfailingly courteous.

In response to concerns many people raised about computers beginning to converse with humans and listen to them, Google announced that when the service launches, the calling system will declare up front that it is not a person.

Many scientists have for years voiced apprehension about the astonishing progress in artificial intelligence being made without consideration of the ethical and social consequences that accompany it.

From Stephen Hawking to Elon Musk, many key figures have argued that great effort should be invested in shaping the character of this machine – or, in less scientific terms, in fighting for its “soul.”

Last June, Google announced a new initiative on ethics in artificial intelligence.

The stated purpose of the initiative, called PAIR (short for People + AI Research), was very general: “to examine the ways in which people interact with artificial intelligence.”

In practice, behind the words lay the desire to find ways to prevent artificial intelligence from becoming racist.

As part of the initiative, the technology giant intends to examine the entire “food chain” – from programmers, through the researchers who use the technology, to the ordinary user – all to make sure that human society can reap the benefits of the technology without facing new threats.

By Shai Ringel, Ynet News



