
Algorithm Measures Human Pecking Order

The way people copy each other’s linguistic style reveals their pecking order.

Measuring power and influence on the web is a matter of huge interest. Indeed, algorithms that distill rankings from the pattern of links between webpages have made huge fortunes for companies such as Google.

One of the most famous of these is the Hyperlink-Induced Topic Search, or HITS, algorithm, which hypothesises that important pages fall into two categories: hubs, which point to other important pages, and authorities, which other important pages point to. This kind of thinking led directly to Google's search algorithm, PageRank.

The father of this idea is Jon Kleinberg, a computer scientist now at Cornell University in Ithaca, who has achieved a kind of cult status through this and other work. It's fair to say that Kleinberg's work has shaped the foundations of the online world.

Today, Kleinberg and a few pals put forward an entirely different way of measuring power and influence; one that may one day have equally far-reaching consequences.

These guys have worked out how to measure power differences between individuals using the patterns of words they speak or write. In other words, they say the style of language during a conversation reveals the pecking order of the people talking.

“We show that in group discussions, power differentials between participants are subtly revealed by how much one individual immediately echoes the linguistic style of the person they are responding to,” say Kleinberg and co.

The key to this is an idea called linguistic co-ordination, in which speakers naturally copy the style of their interlocutors. Human behaviour experts have long studied the way individuals copy the body language or tone of voice of their peers; some have even studied how this effect reveals the power differences between members of a group.

Now Kleinberg and co say the same thing happens with language style. They focus on the way that interlocutors copy each other's use of certain types of words in sentences. In particular, they look at functional words that provide a grammatical framework for sentences but lack much meaning in themselves (words such as "that", "for" and "in", all of which appear in this sentence). Functional words fall into categories such as articles, auxiliary verbs, conjunctions, high-frequency adverbs and so on.

The question that Kleinberg and co ask is this: given that one person uses a certain type of functional word in a sentence, what is the chance that the responder also uses it?
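The paper formalises this more carefully, but the core quantity can be sketched as a difference of probabilities: how often a reply uses a given category of functional word when the previous utterance did, compared with how often replies use it overall. A minimal illustration in Python (the word list, tokenisation and example exchanges below are illustrative, not taken from the paper):

```python
# Sketch of linguistic co-ordination on one functional-word category.
# Co-ordination of the responder toward the initiator on a marker m is
# the increase in the probability that a reply uses m when the initiating
# utterance used m, relative to the replies' baseline rate of using m.

ARTICLES = {"a", "an", "the"}  # one illustrative functional-word category

def uses_marker(utterance, marker_words):
    """Crude check: does any whitespace token match a marker word?"""
    return any(word in marker_words for word in utterance.lower().split())

def coordination(exchanges, marker_words):
    """exchanges: list of (initiator_utterance, reply) pairs."""
    all_replies = [reply for _, reply in exchanges]
    triggered = [reply for first, reply in exchanges
                 if uses_marker(first, marker_words)]
    # P(reply uses marker | initiator used marker)
    p_given = sum(uses_marker(r, marker_words) for r in triggered) / len(triggered)
    # P(reply uses marker) -- the responder's baseline
    p_base = sum(uses_marker(r, marker_words) for r in all_replies) / len(all_replies)
    return p_given - p_base

exchanges = [
    ("I read the brief yesterday", "The argument in the brief is weak"),
    ("We should revise this section", "Agreed, a rewrite is needed"),
    ("Counsel, proceed", "Thank you, your honor"),
    ("Is there a precedent?", "Yes, the court ruled on it in 1998"),
]
print(coordination(exchanges, ARTICLES))  # → 0.25
```

A positive value means the responder echoes the category more often than their baseline after the initiator uses it; averaged over many markers and many exchanges, that echo is the signal Kleinberg and co tie to power differences.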

To find the answer they've analysed two types of text in which the speakers or writers have specific goals in mind: transcripts of oral arguments in the US Supreme Court and editorial discussions between Wikipedia editors (a key constraint in this work is that the conversations cannot be idle chatter; something must be at stake in the discussion).

Wikipedia editors are divided between administrators, who have greater editing privileges over online articles, and non-administrators, who do not. Clearly, the admins have more power than the non-admins.

By looking at the changes in linguistic style that occur when people make the transition from non-admin to admin roles, Kleinberg and co cleverly show that the pattern of linguistic co-ordination changes too. Admins become less likely to co-ordinate with others. At the same time, lower ranking individuals become more likely to co-ordinate with admins.

A similar effect also occurs in the Supreme Court (where power differences are more obvious in any case).

Curiously, people seem entirely unaware that they are doing this. “If you are communicating with someone who uses a lot of articles — or prepositions, or personal pronouns — then you will tend to increase your usage of these types of words as well, even if you don’t consciously realize it,” say Kleinberg and co.

This effect has only become clear now by number-crunching large volumes of text and transcripts. “Our work is the first to identify connections between language coordination and social power relations at large scales, and across a diverse set of individuals and domains,” say Kleinberg and co.

That has potential applications in all kinds of scenarios where there is a reasonably large body of discussion. One thing Kleinberg and co could do very easily is rank Wikipedia editors by their power (although, to be clear, they have not done this in their paper).

It’s not hard to imagine a company doing the same thing with internal email records to determine which individuals wield the most power and influence. Google already processes the contents of private emails to serve adverts; why not also to determine the power ranking of individuals in Google+?

It might also be possible to rank bloggers, tweeters and Facebook pages in this way, possibly by combining the technique with other ranking systems.

And if this kind of analysis can be done on the fly during real conversations, it might be possible to provide important feedback during negotiations, interviews, legal statements and the like.

This looks to be important work with far-reaching implications. Not least of these is for privacy. It’s hard to imagine that anyone intends to reveal their pecking order when conducting a conversation and yet Wikipedians in particular and netizens in general may well be doing so regularly and unknowingly.

It is a salutary lesson that information judged innocuous at one time may turn out at a later date to be far more revealing than anyone could imagine.

Ref: arxiv.org/abs/1112.3670: Echoes of Power: Language Effects and Power Differences in Social Interaction
