By Globenews9 | Last updated Jul 5, 2021, 2:00 PM | Technology
Sometimes interacting with robots is easier than with humans


A new study shows higher performance in idea generation when humans believe they’re interacting with a bot. Is working with robots in our future?

Image: iStock/Besjunior

A recent study presented at an Association for Computing Machinery (ACM) conference detailed an interesting experiment. Three iterations of the experiment had individuals interacting over text chat to perform idea-generation tasks. In the first, "control" run, each participant was assigned either a human or a robotic "bot" chat partner. In the second run, every participant was actually paired with a human but was randomly told that the chat partner was either a human or a bot. The final run tested the opposite scenario: a bot sat at the other end of the chat, while participants were again randomly told they were chatting with either a human or a bot.


The results of the experiment indicated that the humans did a better job generating ideas when they thought they were interacting with a bot, regardless of whether it was a machine or a human being pretending to be a machine on the other side of the chat. This seems surprising and counterintuitive; even the best artificial intelligence tools struggle to generate new ideas, so the suggestion that a bot is the superior ideation partner initially seems odd.

To add another counterintuitive wrinkle to the results, when a subset of participants who described themselves as having “high social anxiety” were interacting with a bot perceived as more machine-like, they performed even better at idea generation. As we strive to create increasingly human-like AI tools, it seems that in some areas at least, a more robotic and non-human interaction is actually superior.

When our humanity fails us

Reading a bit deeper into the study, it seems the “power of the bot” was less about AI wizardry and more about creating a perceived “judgment-free” zone. We’ve all been in a brainstorming session or a general meeting where one person dominates the proceedings, using some combination of charisma, organizational position or brusqueness. When interacting with what we perceive as a non-human partner, it seems that we subconsciously eliminate the fear of rejection or public scrutiny that might cause self-censorship, unlocking the creativity that we self-limit when interacting with another human.


This is not an entirely new or unknown phenomenon. Consider ancient tools like self-talk and journaling, where you are essentially experimenting with different thoughts, ideas and approaches by articulating them in a private setting, freed from the judgment of others. Writing a half-baked and crazy idea in one’s personal journal might allow for breakthrough thinking that would be impossible if we had to articulate the early kernels of the idea publicly. It appears the same mechanism is at work in this experiment: When our subconscious fears of judgment by our fellow humans are eliminated, we’re able to be more creative.

For the high-anxiety group, this makes even more sense: knowing that they are definitely not interacting with a human reduces the anxiety triggered by interpersonal interactions. Just as someone once penned "Dear Diary" as they began an interaction with a decidedly non-human paper journal, perhaps we'll be typing out "Dear Bot" to work through our personal and creative challenges in the future.

What this means for technology leaders

There are several interesting implications for technology leaders, especially since the prevailing bias in AI design leans toward more human-like interactions. In the specific case of idea generation, the opposite appears to be preferable. Extrapolating from this result, any situation in which a person might fear judgment or a negative social interaction with peers could be a good fit for a bot or other non-human interface. Areas ranging from ethics to HR to mental health might be great opportunities for a bot that's intentionally machine-like, as it will allow for more open interactions.


Based on the results of this study and our general understanding of human psychology, putting 60 people on a Zoom call and declaring it a "safe space" is precisely the worst environment for encouraging open and unconventional thinking on any topic.

Finally, one fascinating lesson from this study doesn't require bots or AI at all. Simply providing structured, individual "think time" with tools like mind maps, notepads, Post-Its and Sharpies, then having a facilitator collate and anonymously present each concept, will likely turbocharge your team's ideation abilities with nary a bot involved.

