We tend to think of humans as being rational. That is certainly the modern western narrative of human nature and is the basis for perceiving humans as more advanced than animals or machines. But there is evidence that we are primarily social rather than rational — and that sociality is a dominant feature of learning and perception for animals and machines, as well as humans.
The Nature of Socialization
Three interesting studies caught my attention at the beginning of 2023. While addressing wildly different subjects, they all shared a common theme – socialization applies to animals, humans, and machines.
The first was a widely covered study of the tickling response in rats. That response, which includes vocalization and “Freudensprünge” jumps of joy, has long been observed in rats being tickled. The study found that the same response occurs when rats merely watch other rats being tickled. Rats are social creatures, and it appears they are empathic enough to “feel” and share the joy of other rats being tickled. That joy, according to the researchers, appears to be “contagious”. I suspect this behavior is part of the glue that holds rats together in packs. It is a direct parallel to the contagion of laughter in humans. Start laughing in a crowd of friends or family, even without the priming of something funny, and most will begin laughing with you. This behavior is not rational, but it is an important feature of human, as well as rat, nature. Shared joy and laughter can be a powerful bonding experience.
The second study, highlighted in a Nautilus interview with its principal author J.D. Haltigan, reported on the impact of the social media platform TikTok on the mental health of its most active users, particularly adolescent girls. The study observed that the medium provides an immersive audiovisual experience, encourages unmediated “performative” individual expression, and relies on algorithms that facilitate powerful community reinforcement. The study concludes that TikTok has enabled a significant rise and spread of self-reported, non-normative mental health conditions, including symptoms associated with Tourette Syndrome and dissociative identity (“plurals”).
In the context of adolescent identity development, Haltigan describes this environment as a “toxic stew” leading to “inaccurate or problematic diagnoses or personality features being positively reinforced in the absence of real clinical intervention or diagnoses.” Unlike the positive Freudensprünge of rats, this apparent contagion in adolescent humans seems to reinforce harm rather than joy. Just as the “nocebo” effect can lead vulnerable people to experience physical symptoms with no physiological cause, the “TikTok” effect is leading some vulnerable people to experience mental or behavioral symptoms with no neurological or psychological origin. Like shared laughter, these behaviors are not rational, and they further demonstrate the power of social reinforcement in determining behavior and belief. The negative consequences of this effect are now clearly being observed by mental health professionals.
The third study, actually a combination of several studies, was reported in a post by Scott Alexander on his blog Astral Codex Ten. The post reports on the challenges researchers in Artificial Intelligence are having in aligning the values of increasingly sophisticated AI models with the expectations of their creators. One of the latest such models, reported at length in the general media, is ChatGPT (Generative Pre-trained Transformer), a “large language model” launched by OpenAI in November 2022. It can provide detailed, articulate answers across many domains of knowledge, based on a massive corpus of training data and advanced learning techniques including RLHF (Reinforcement Learning from Human Feedback). While the answers ChatGPT and similar models give to common questions can seem quite intelligent and generally reliable, Scott reports on the extensive “testing” of such models to identify flaws.
One of the key flaws is that greater computing power and more extensive RLHF training increase the tendency of the models to respond not with the most accurate answers, but with those that will best satisfy the questioner. Referred to as “sycophancy bias,” this tendency means that the more these models are subject to what could be called social training, the less trustworthy they become. Rather than giving true and reliable answers, they get better at giving you the answer you want to hear. No researcher at this point would claim that AI models are close to achieving independent conscious identity. The presumption is that these models are fundamentally digital and operate rationally on the basis of the data and training they receive. Even so, the challenge of properly socializing AI systems is proving to be very difficult. The human-like tendency of AI systems to curry favor with those in power may well be a harbinger of difficulties yet to come. Designing AI that performs at human levels of intelligence may produce behaviors, even if presumably rational, that are as unpredictable and potentially virulent as those of humans. Human history may be the best guide for the challenges to come.
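To make the mechanism concrete, here is a deliberately simplified sketch – not Scott’s analysis or OpenAI’s actual training code – of how feedback based purely on human approval can produce this drift. It assumes only that simulated raters lean slightly toward the agreeable answer over the accurate one; a policy trained on their comparisons ends up mirroring that preference rather than the truth. All names and numbers are invented for illustration.

```python
# A toy cartoon of preference-based fine-tuning -- NOT OpenAI's actual RLHF
# pipeline. The only training signal is which of two answers a simulated rater
# prefers. If raters tilt even slightly toward the flattering answer, the
# policy converges toward flattery. All probabilities here are invented.
import random

random.seed(0)

PREFERENCE_FOR_FLATTERY = 0.60  # assumed: raters pick the flattering answer 60% of the time
LEARNING_RATE = 0.01

# The "policy" is simply the probability that the model gives the flattering
# answer instead of the accurate one.
p_flattering = 0.10  # starts out mostly accurate

for step in range(2000):
    # Simulated human feedback: the rater compares the two answers and picks one.
    rater_prefers_flattering = random.random() < PREFERENCE_FOR_FLATTERY

    # Move the policy a small step toward whichever answer won the comparison.
    target = 1.0 if rater_prefers_flattering else 0.0
    p_flattering += LEARNING_RATE * (target - p_flattering)

print("Before training: flattering answer 10% of the time.")
print(f"After training:  flattering answer about {p_flattering:.0%} of the time.")
```

In this cartoon the model never learns anything about truth at all; it simply converges toward whatever the raters reward, which is the essence of the sycophancy problem Scott describes.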
Counteracting Negative Socialization with Virtue
Socialization is a process of individual behavior being influenced by group feedback. The group feedback works to enforce conformity of the individual to group norms and behaviors. Ideally, this social conformance results in behaviors contributing to group success. But this is not always the case. In the case of RLHF training for AI models, the training inadvertently reinforces sycophancy bias – contrary to the intent of the programmers – and thereby undermines trustworthiness. TikTok is a powerful learning and social sharing platform that, inadvertently or perhaps as a byproduct of the financial incentives for content creators, promotes mental health dysfunction. Rat pack cohesion encouraged by the tickling response will not always be a successful survival strategy for rodents (or humans) facing unusual circumstances. “Walking into a trap” is an apt metaphor for the downside of rat cohesion, one human corollary of which is complicity with mob psychology. None of these consequences could be considered positive – either for the individual or the group.
Combating the social pressures for conformity, in cases like these, is critically important to individual and group success. The individual must have the insight to recognize the broader implications of the behavior being solicited, and the fortitude to act contrary to the social pressure. Insight can be characterized as a rational process – seeing what is actually going on and understanding that the consequences are at odds with the broader goals of individual and social thriving. But rationality is not enough. The individual must also possess the fortitude to behave contrary to social pressures. That requires honesty, integrity, and courage – virtues needed to align one’s behavior with what one knows to be true, even in the face of significant social pressure to the contrary.
Teaching Virtues
I’m not sure we can say that this process works the same way for rats, although there may be individuals in a pack that are less conformist, leaving some room for social experimentation as a hedge against a changing environment. But for humans, this idea is borne out by teachings in both philosophy and religion. “Be true to yourself.” “Know thyself.” “Do for others as you would have them do to you.” “Turn the other cheek.” … The list of aphorisms is endless. Individuals who demonstrate insight and integrity are highly revered – even though many of them, like Socrates, Jesus, Martin Luther King Jr., and many, many others, were ultimately killed for their non-conformance with social norms. They lost their lives, but human society benefitted greatly.
As to AI programs, perhaps the alignment of machine values with human intentions will require a more sophisticated level of training. We will need to teach machines to behave in ways that are consistent with the virtues of the very best of humans. This is a huge challenge. But perhaps the research involved in this process may help us with another huge challenge: helping our society inculcate these virtues in our own children more consistently than we seem to be doing at present.
For additional thoughts on technology and society, check out Technology is Not Value Neutral, A Culture of Fantasy, and Internet Evils: Make it Better.