Tuesday, August 24, 2010
Oxytocin makes people trusting, but not gullible.
Mikolajczak et al. demonstrate that oxytocin (OT) is not the magical “trust elixir” described in the news, on the Internet, or even by some influential researchers. They designed experiments to show that it does not make people indiscriminately prosocial (trusting to a fault). They used a customized version of the trust game that manipulated partners’ trustworthiness and measured participants’ investment in each partner. They found that participants who received a nasal OT spray invested more than control participants, unless there were cues that a partner might not be trustworthy. They also observed a significant effect of OT when the partner was a computer, suggesting that OT’s effect may be moderated not by the human versus nonhuman nature of the partner, but rather by the perceived risk inherent in the interaction. In case you are interested, here is their description of the setup:
Each participant assumed the role of investor and could transfer money to a “trustee,” in whose hands the funds would triple. The trustee would then transfer all of the money, part of it, or none of it back to the investor. If the investor entrusted the trustee with all of his money, he could maximize his profits if the trustee was reliable and fair; conversely, he could lose everything if the trustee was not. The trust game is thus well suited to establishing the investor’s level of trust (i.e., the higher the trust, the higher the transfer). Each participant played with three different types of trustees: seemingly reliable humans, seemingly unreliable humans, and the computer (i.e., a fully neutral device). By manipulating the partners’ trustworthiness, we sought to determine the extent to which OT impairs sensitivity to potential signs of dishonesty.
In one part of the game, participants were told that they would play 10 rounds with the computer, which would randomly determine the back transfers; in another part, participants were told that they would play online with real people. We gave participants a brief description of their partner before each round. Based on a pretest, these descriptions were manipulated to induce high or low trust: We combined trustworthy academic fields (e.g., philosophy) and activities (e.g., practicing first aid) to make some partners seem reliable, and untrustworthy academic fields (e.g., marketing) and activities (e.g., playing violent sports) to make other partners seem unreliable. The main effect of partner type confirms that these descriptions were effective in inducing trust or mistrust in this study. Each participant played 10 rounds with 10 different partners (5 trustworthy, 5 untrustworthy). No feedback on back transfers was provided during the experiment, and the order of presentation was randomized. Before leaving the laboratory, participants were asked to guess the group (OT or control) to which they had been assigned.
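For readers who want the payoff arithmetic spelled out, here is a minimal Python sketch of a single trust-game round. Only the tripling of the transfer and the trustee's option to return all, part, or none of it come from the description above; the endowment of 10 units, the function name, and the example return fractions are illustrative assumptions, not values from the study.

    # Minimal sketch of one trust-game round; the endowment and the specific
    # back-transfer fractions below are illustrative assumptions, not values
    # taken from the study.

    def trust_game_round(endowment, transfer, return_fraction):
        """Return (investor_payoff, trustee_payoff) for one round.

        endowment:       what the investor starts with (assumed value)
        transfer:        amount the investor sends to the trustee (0..endowment);
                         this is the behavioral measure of trust
        return_fraction: share of the tripled transfer the trustee sends back (0..1)
        """
        tripled = 3 * transfer                     # funds triple in the trustee's hands
        back_transfer = return_fraction * tripled  # trustee returns all, part, or none
        investor_payoff = endowment - transfer + back_transfer
        trustee_payoff = tripled - back_transfer
        return investor_payoff, trustee_payoff

    # A fair trustee who returns half of the tripled amount rewards full trust...
    print(trust_game_round(endowment=10, transfer=10, return_fraction=0.5))  # (15.0, 15.0)
    # ...while an unfair trustee leaves a fully trusting investor with nothing.
    print(trust_game_round(endowment=10, transfer=10, return_fraction=0.0))  # (0.0, 30.0)

The sketch simply makes the stakes concrete: the more the investor transfers, the more he stands to gain from a fair trustee and the more he stands to lose from an unfair one, which is why the transfer amount serves as the index of trust.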