Second Life Treachery
If Second Life users intend to lie, do they create avatars in their own image? At the University of Nebraska, a team of researchers has conducted a study to determine whether using an avatar decreases anxiety about deceiving others.
In their study, the researchers randomly divided a group of students into liars and truth tellers. Students in both groups were allowed to create an avatar as they saw fit.
For comparison, the researchers had another group of students lie to one another over a text-only instant-messaging system. Afterward, all the students anonymously filled out a survey to report their levels of anxiety.
The researchers found that the liars were more likely than the truth tellers to choose avatars that had an appearance different from themselves, and that lying avatars felt less anxiety about their deceit than did their text-only counterparts. The researchers conclude,
By selecting an avatar that is different from oneself (i.e., “putting on a mask”), the deceiver may perceive a greater distance from their conversation partner and a reduced likelihood that the deception can be detected. Therefore, deceivers use avatars to further increase their anonymity online.
The team’s research brings to mind a point that Lawrence Lessig made in Code: Version 2.0, his book about the Internet: he said the Internet could offer users a veritable Ring of Gyges. The Ring of Gyges, you might recall, is an imaginary ring that bestows invisibility on its wearer. Plato thought up the ring as a way to test people’s sense of justice: while wearing it, as one of Plato’s interlocutors muses in The Republic, “no man can be of such an iron nature that he stands fast in justice.” The question, then, is whether we would do wrong if we knew we would suffer no consequences for it.
Unfortunately, the answer appears to be yes.
That a great number of vanilla people roam the virtual world naked or wildly punked out or in goth-rock chic is of no comfort to this user. I now know that they are more likely to lie to me. But then again, isn’t that part of the fantasy?