Apple’s Siri correctly answered more questions than Amazon.com’s Alexa or Microsoft Corp’s Cortana in a recent head-to-head test of smartphone-based digital assistants conducted by venture capital firm Loup Ventures.
However, the clear overall winner among the four digital assistants examined was Google Assistant, which correctly answered 86 per cent of the 800 questions.
Loup Ventures found that the four major smartphone-based digital assistants from Apple, Google, Amazon.com and Microsoft have all improved over time in its annual digital assistant IQ test.
It also said that all four digital assistants now understand the majority of people’s questions; providing the right answer is the remaining challenge.
Amazon’s Alexa and Google Home showed accent bias, with the machines finding Chinese and Spanish accents the most difficult to understand clearly.
“Both the voice recognition and natural language processing of digital assistants across the board has improved to the point where, within reason, they will understand everything you say to them,” the analysts at Loup Ventures wrote.
Siri, Microsoft’s Cortana and Amazon’s Alexa were tested through their apps on an iPhone, while Google Assistant was tested on Google’s own Android smartphone, the Pixel XL.
The world’s tech giants are racing to show that their digital assistants, powered by artificial intelligence, can fully understand and hold a conversation with a human, and that these machines are the technology of the near future.
Voice-enabled shopping through digital assistants is also expected to gain traction in the next few years. A recent survey by OC&C Strategy Consultants predicts that voice shopping in the US will reach an annual value of US$40 billion by 2022, up from US$2 billion currently.
“Voice computing is about removing friction,” the Loup Ventures analyst wrote. “One new feature that accomplishes that is continued conversation, which allows you to ask multiple questions without repeating the wake word each time.”
The Loup Ventures study found that Siri correctly answered 79 per cent of the questions, while Alexa scored 61 per cent and Cortana 52 per cent.
The test comprised 800 questions designed to comprehensively examine the ability and utility of a digital assistant, divided into five categories: local, commerce, navigation, information and command.
“We found Siri to be slightly more helpful and versatile (responding to more flexible language) in controlling your phone, smart home, and music,” the analysts wrote. “Apple, true to its roots, has ensured that Siri is capable with music on both mobile devices and smart speakers.”
The Loup Ventures analysts noted that proper nouns, such as the name of a town or a restaurant, gave the digital assistants the most trouble.
“With scores nearing 80 to 90 per cent, it begs the question, will these assistants eventually be able to answer everything you ask?” the analysts wrote. “The answer is probably not, but continued improvement will come from allowing more and more functions to be controlled by your voice.”
(Source: www.scmp.com)