Multiple people weighing in on a problem might intuitively seem to lead to a better outcome, but as any manager will tell you, keeping a large team on task is not easy. However, recent advances in artificial intelligence (AI) could make harnessing this collective wisdom much easier, making us more effective at our jobs and better able to solve pressing social challenges.
“We know that the future of work is all about collaboration and problem solving,” says Peter Baeck, who leads the Centre for Collective Intelligence Design at Nesta, a UK charity that funds and promotes research into groundbreaking ideas. “One of the most obvious opportunities is using AI to better create connections within often quite chaotic messy networks of people who are working on a common challenge.”
The biggest factor affecting how collectively intelligent a group can be is the degree of coordination among its members, says Anita Woolley, a leading expert in organisational behaviour at Carnegie Mellon University. Smart tools can be a boon in this area, which is why Woolley is working with colleagues to develop AI-powered coaches that can track group members and provide prompts to help them work better as a team.
“The roles these [AI] tools can play are virtually endless,” says Woolley. “Fostering communication among the different components, reminding people of stuff they might have forgotten, being a repository of information, helping the group coordinate its decision making.”
There are already promising examples of how AI can help us better pool our unique capabilities. San Francisco start-up Unanimous AI has built an online platform that helps guide group decisions, drawing on an unlikely source of inspiration: the way honeybees make collective decisions.
“We went back to basics and said, ‘How does nature amplify the intelligence of groups?’,” says CEO Louis Rosenberg. “What nature does is form real-time systems, where the groups are interacting all at once together with feedback loops. So, they're pushing and pulling on each other as a system, and converging on the best possible combination of their knowledge, wisdom, insight and intuition.”
Their Swarm AI platform presents groups with a question and places potential answers in different corners of their screen. Users control a virtual magnet with their mouse and engage in a tug of war to drag an ice hockey puck to the answer they think is correct. The system’s algorithm analyses how each user interacts with the puck – for instance, how much conviction they drag it with or how quickly they waver when they’re in the minority – and uses this information to determine where the puck moves. That creates feedback loops in which each user is influenced by the choice and conviction of the others, allowing the puck to settle on the answer that best reflects the collective wisdom of the group.
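Unanimous AI has not published the details of its algorithm, but the dynamic described above – conviction-weighted pulling with feedback that penalises wavering minorities – can be illustrated with a toy one-dimensional model. Everything here (the decay rate, the learning rate, the distance threshold) is an invented simplification, not the company's method:

```python
def swarm_decide(preferences, convictions, steps=100, lr=0.05):
    """Toy swarm consensus: agents pull a 1-D 'puck' toward their
    preferred position with conviction-weighted force; agents left
    far from the emerging consensus gradually lose conviction.
    A hypothetical sketch, not Unanimous AI's actual algorithm."""
    puck = 0.0
    conv = list(convictions)
    for _ in range(steps):
        # Net pull on the puck: conviction-weighted sum of each
        # agent's direction and distance.
        pull = sum(c * (p - puck) for p, c in zip(preferences, conv))
        puck += lr * pull / sum(conv)
        # Feedback loop: agents far from the puck waver, so their
        # conviction decays slightly each step.
        conv = [c * (0.99 if abs(p - puck) > 0.5 else 1.0)
                for p, c in zip(preferences, conv)]
    return puck
```

With three agents preferring +1 and one preferring −1, the puck drifts steadily toward the majority's side; with an evenly split group it stays at the midpoint, mirroring how a deadlocked swarm fails to converge.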
Several academic papers and high-profile clients who use the product back up the effectiveness of the Swarm AI platform. In one recent study, a group of traders were asked to forecast the weekly movement of several key stock market indices by trying to drag the puck to one of four answers: up or down by more than 4%, or up or down by less than 4%. With the tool, they boosted their accuracy by 36%.
Credit Suisse has used the platform to help investors forecast the performance of Asian markets; Disney has used it to predict the success of TV shows; and Unanimous has even partnered with Stanford Medical School to boost doctors’ ability to diagnose pneumonia from chest X-rays by 33%.
But designing technology that can mesh well with human teams can be surprisingly difficult, says Woolley.
In one study, her team tried out three different tools designed to maximise collective intelligence: one that gave real-time feedback on team members’ effort, another that helped divide and assign tasks, and a chatbot that helped the group talk about their skills and expertise.
The first tool appeared to demotivate people, while the second ended up distracting teams with unnecessary planning. Only the final tool helped, by ensuring the best-suited people worked on each task. “What we keep discovering is that it's easier to be harmful or to create things that annoy people than it is to create things that are honestly helpful,” says Woolley.
It is still very difficult to build AI with social intelligence, she says, because machines continue to struggle to pick up the nuanced social cues that guide so much of group dynamics. It’s also evident from Woolley’s research that these kinds of systems only work if humans truly trust the AI’s decision making, and if the system delivers only gentle nudges. “As soon as it's heavy handed, people are looking for ways to disable it,” says Woolley.
But the reasons why it can be hard to combine machines and humans are also central to why they work so well together, says Baeck. AI can operate at speeds and scales far out of our reach, but machines are still a long way from replicating human flexibility, curiosity and grasp of context.
A recent report Baeck co-authored with Nesta senior researcher Aleks Berditchevskaia identified a number of ways AI could enhance our collective intelligence. These include making better sense of data, coordinating decision making more effectively, helping us overcome our inherent biases and highlighting unusual solutions that are often overlooked.
But the report also showed that combining AI tools with human teams requires careful design to avoid unintended consequences. And there is currently a dearth of research into how groups react to being corralled by AI, says Berditchevskaia, making it hard to predict how effective these systems will be in the wild.
“It can potentially stretch us in new ways or enhance our speed when we need to be reacting quicker,” she adds. “But we're still at very early stages of understanding and being able to navigate the individual reactions to those kinds of AI system in terms of issues of trust and how it affects their own sense of agency.”
Our combined wisdom can also help give a more human element to AI technology, and better guide its decisions.
London-based start-up Factmata, which has built an AI moderation system, has enlisted more than 2,000 experts, including journalists and researchers, to analyse online content for bias, credibility and hate speech. The company then used this analysis to train a natural-language processing system to automatically scan web pages for problematic content.
“Once you have that trained algorithm it can scale to those millions of pieces of content across the internet,” says CEO Dhruv Ghulati. “You're able to scale up the critical assessment of those experts.”
While AI is often trained on data labelled by experts in a one-off process, Factmata’s experts continually refresh the training data to make sure the algorithms can keep up with the ever-shifting political and media landscape. They also let members of the public give feedback on the AI’s output, which Ghulati says ensures it remains relevant and free of bias itself.
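The workflow Ghulati describes – experts continually adding fresh labelled examples and the model being refit against the updated pool – can be sketched with a deliberately tiny keyword classifier. The class name, the word-voting scheme and the labels are all illustrative inventions; Factmata's real models are not public:

```python
from collections import Counter, defaultdict

class RefreshableClassifier:
    """Toy text classifier retrained from a continually updated pool
    of expert-labelled examples. A hypothetical stand-in for a
    production NLP pipeline like Factmata's."""

    def __init__(self):
        self.pool = []                           # (text, label) pairs
        self.word_labels = defaultdict(Counter)  # word -> label counts

    def add_labelled(self, text, label):
        """An expert supplies a fresh labelled example; the model is
        refit so its view of the landscape never goes stale."""
        self.pool.append((text, label))
        self._refit()

    def _refit(self):
        self.word_labels = defaultdict(Counter)
        for text, label in self.pool:
            for word in text.lower().split():
                self.word_labels[word][label] += 1

    def predict(self, text):
        """Each known word votes for the labels it has appeared with;
        the majority label wins, or None if every word is unseen."""
        votes = Counter()
        for word in text.lower().split():
            votes.update(self.word_labels[word])
        return votes.most_common(1)[0][0] if votes else None
```

Because every call to `add_labelled` refits from the full pool, newly labelled examples immediately shift future predictions, which is the essence of the continual-refresh approach, as opposed to a one-off labelling exercise.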
Entwining ourselves and our decisions ever more tightly with AI is not without risks, however. The synergy between AI and collective intelligence grows stronger the more information we give the machine, says Woolley, which involves difficult choices about how much privacy we’re happy to surrender.
But given the world’s complex, multifaceted challenges, such as climate change and pandemics, harnessing our collective intelligence more effectively is essential, she says.
There are already examples of how the approach could tackle these kinds of problems. Researchers at Carnegie Mellon are currently forecasting the Covid-19 pandemic in real time by using machine learning to combine voluntary symptom surveys, doctors’ reports, lab statistics and Google search trends. Elsewhere, the Early Warning Project in the US has identified countries at greatest risk of atrocities using a combination of crowd-sourced predictions, expert assessments and machine learning algorithms.
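Both projects above blend heterogeneous signals into one forecast. Stripped of the machine learning, the core idea is a weighted combination of sources, each normalised to a common scale. The signal names and weights below are invented for illustration; neither project's actual model is this simple:

```python
def combine_signals(signals, weights):
    """Weighted blend of forecasting signals, each pre-scaled to
    [0, 1]. A simplified, hypothetical stand-in for the learned
    combinations used in real epidemic or atrocity forecasting."""
    total = sum(weights[k] for k in signals)
    return sum(signals[k] * weights[k] for k in signals) / total

# Illustrative inputs: each source's current risk estimate on [0, 1].
signals = {
    "symptom_surveys": 0.6,
    "doctor_reports": 0.8,
    "search_trends": 0.4,
}
```

In practice the weights would themselves be learned from how well each source has predicted outcomes historically, which is where the machine learning comes in; the sketch only shows why combining sources beats relying on any single one.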
“We can have some very smart individuals working on different components in isolation, but if we don't have coordination it's really hard to make any headway,” says Woolley. “I think it's going to be critical to helping these different components actually coordinate in the ways needed to address the biggest collective action problems.”