26 May
Human-Autonomy Teaming: The holy grail of AI
A rather zeitgeisty thing to do at the moment is to discuss a subject in relation to ChatGPT. That could be in the form of a question, for example: what are the implications of ChatGPT for politics? Or it could be to run your chosen subject matter through an AI bot and see what it has to say about you or perhaps your employer. Often the results emerge as a conflicting mix of absurd comedy and disquieting accuracy.
An area that represents both a sort of holy grail for AI and simultaneously exposes its limitations is teamwork. If AI is the creation of a human-like intelligence via computer-powered wizardry, then the thing that represents the trickiest challenge to that wizardry is teamwork. The reason for this is that evolutionary neuroscience and anthropological research tell us that much of the distinctive wizardry we possess in our human brains evolved in relation to our capacity to coordinate and work interdependently. In short, what sets us apart from the animals is our ability to do teamwork. Teamwork is a highly advanced and disproportionately cognitively taxing skill that is not remotely paralleled in any of our co-inhabitants on earth. Indeed, among those animals where effective teamwork is a relative strength, there is usually a correlation with brain evolution (see chimps and dolphins).
On that basis, and fully embracing the aforementioned bandwagon, I decided to see what ChatGPT had to say about teamwork. So I asked it: What is a team? Here is its response:
A team is a group of individuals who work together in a collaborative manner towards a common goal or objective. Teams can be found in various settings, such as in the workplace, in sports, in education, in community organizations, and in many other contexts.
A team typically consists of members who bring different skills, perspectives, and experiences to the group, and who work together to achieve a shared purpose. A team may have a leader or a designated person who provides guidance and direction, but each member has a role to play and contributes to the overall success of the team.
Teams may be temporary or permanent, and their size can vary greatly depending on the nature of the task or project they are working on. Effective teams typically have good communication, clear roles and responsibilities, mutual trust, and a shared commitment to achieving their goals. Team members often collaborate, share information and ideas, support each other, and hold themselves and each other accountable for their performance.
Teams can be highly effective in solving complex problems, making decisions, generating innovative ideas, and achieving outcomes that may be difficult or impossible to achieve individually. They can also foster a sense of camaraderie, promote learning and development, and create a positive and inclusive work environment.
I think this is a pretty darned comprehensive and high-quality response. This bodes well for the future of teamwork and AI. Or does it? Something we need to be specific about when we discuss 'AI' is precisely what we are referring to. This is not gratuitous pedantry; it is fundamental to how AI's role in the complexities of our working lives will be determined. A so-called large language model (LLM) like ChatGPT allows for human-automation interaction: interaction that is still entirely dependent on the human, but which is afforded a lighter workload by virtue of the functionality of the AI. We also have human-automation interaction that combines with human-agent teaming in, for example, car manufacturing, where robots perform a specific role within an assembly process. Something that represents an entirely different level of complexity is 'human-autonomy teaming' (HAT). This is where an AI agent has the capacity to function entirely autonomously in a team, operating and coordinating interdependently with human teammates. So, whilst ChatGPT's response to me was impressive, I couldn't exactly call ChatGPT an autonomous member of my team in my quest to understand teams better.
Thinking about how good AI is at 'teaming', I decided to ask ChatGPT a self-reflective question. This time I asked it: Can artificial intelligence and humans work together successfully as a team? I don't want to accuse our friendly AI bot of personal bias, but it certainly saw the 'upside' of AI and teams. Here's what ChatGPT had to say (for itself?):
Yes, artificial intelligence (AI) and humans can work together successfully as a team. With advancements in AI technologies, there are increasing opportunities for collaboration between humans and AI systems to achieve common goals.
AI can bring unique capabilities to a team, such as data analysis, pattern recognition, automation of repetitive tasks, and processing large amounts of information quickly and accurately. AI systems can also provide insights, recommendations, and predictions that can inform decision-making and problem-solving.
On the other hand, humans possess cognitive and emotional capabilities, such as creativity, critical thinking, empathy, and ethical judgment, which can complement the capabilities of AI systems. Humans can provide context, interpret and apply AI-generated outputs, and make decisions based on broader considerations, such as ethical implications, social impacts, and human values.
When humans and AI work together effectively, they can create synergies that enhance team performance. For example, in fields like healthcare, finance, and customer service, AI can assist human professionals in diagnosing diseases, predicting market trends, or providing personalized recommendations to customers. In research and development, AI can accelerate innovation by analyzing vast amounts of data, generating hypotheses, and optimizing experiments, while human researchers can provide domain expertise and interpret results.
To ensure successful collaboration between humans and AI, it is important to establish clear roles and responsibilities, foster effective communication, build trust, and ensure transparency in how AI systems operate. Ethical considerations, such as fairness, transparency, and accountability, should also be addressed to ensure that AI systems are designed and used responsibly in alignment with human values.
So, where I would have given ChatGPT an 'A' for its first answer, this second answer has a few holes in it. What ChatGPT has done in this answer is stay firmly in safe territory, contrasting those highly evolved human skills of creativity and communication with the sort of supplementary 'automation' contributions AI like ChatGPT can make. In other words, it focuses on human-automation interaction rather than human-autonomy teaming. Some contemporary research paints a slightly different picture. In work published in the journal Human Factors by Professor Nancy Cooke and colleagues, the researchers ran experiments comparing team performance between all-human teams and human teams with a prototype autonomous AI teammate. Their study was done with the US Air Force, and the experimental teams were made up of three members: a drone pilot, a navigator and a photographer. Their task was to observe enemy 'targets' and photograph them. In some of the teams, the pilot was replaced by a 'synthetic' AI pilot. The results were mixed. One stand-out line from the study suggests at first that we're almost there with AI: "When one compares team performance across conditions, synthetic teams [ones with an AI pilot] perform as well at the mission level as all-human teams." But there are caveats, and they are intriguing.
What this study also found was that some of the more subtle capacities we deploy as humans in teams held the AI teams back. One of the key issues the AI teams encountered was with so-called backup and monitoring activities, where a team member understands the needs of a teammate and offers information or takes action in anticipation of those needs. Such actions are known as 'push' actions, where the team member pushes information onto the team in the knowledge that it is soon to be required. This behaviour is subtle, but it represents an awareness that allows the right information or action to be deployed to serve the needs of the right people at the right time. The AI pilot was very good at looking after its own role, but when it came to anticipating the needs of other members of the team, it underperformed compared to the all-human teams. Even more fascinating was that the performance of the human members of the teams with an AI pilot also began to dip as they went through the experiment. This tells us not only that we have some way to go with AI, but that it's not just the AI's performance we have to worry about: our own performance as humans can be degraded by the capabilities our digital teammates lack. This speaks to the highly interdependent and intermeshed nature of true teamwork, which creates high standards that AI has to somehow meet.
On the upside, the research tells us that human-autonomy teaming stands to offer many benefits to broader human performance in the future. It also represents the future of teams in hostile, isolated, extreme, high-risk and dynamic environments such as war and deep space. Indeed, the British Army has an ongoing project on human-machine teaming. So maybe ChatGPT was right: perhaps artificial intelligence and humans can work together successfully as a team after all.
McNeese, N. J., Demir, M., Cooke, N. J., & Myers, C. (2018). Teaming with a synthetic teammate: Insights into human-autonomy teaming. Human Factors, 60(2), 262-273.