Google killer? OpenAI just launched a voice-controlled ChatGPT app for iPhone/iPad users
OpenAI is going mobile with a ChatGPT app that also features voice-recognition controls
Already holding the record for the fastest-growing consumer app in history, hitting 100 million users just two months after its public release, OpenAI’s sassy ChatGPT chatbot is set to expand its kingdom of human interlocutors thanks to a just-released iOS app.
On Thursday, OpenAI announced the launch of ChatGPT mobile, a free app that gives users on-the-go access to the AI-driven natural language processor, with the added benefit of voice-recognition inputs thanks to the integration of OpenAI’s open-source speech-recognition system, Whisper.
For now, only Apple iPhone and iPad users will be able to roam freely while engaging with ChatGPT, but OpenAI says an Android version is in the works and will be coming “soon.” While the free app is driven by OpenAI’s GPT-3.5 model, the latest model, GPT-4, can be accessed through the company’s premium “Plus” subscription service, which will set you back $20 per month. And if you’re already a desktop user, OpenAI says the new app will sync your account and keep track of saved inquiries and responses across both interfaces.
OpenAI’s app announcement also included a not-so-subtle shot across the bow of Google and other search engine operators, touting the ChatGPT app’s ability to deliver “instant answers” to inquiries and “precise information without sifting through ads or multiple results.”
The tidal wave of interest that has followed the emergence of ChatGPT is of concern to Google for a number of reasons.
First, ChatGPT developer OpenAI appears to be outpacing Google when it comes to developing AI tools, even though the king of search has pumped billions of dollars into the effort. ChatGPT is just the latest iteration of a series of AI system releases from the Microsoft-backed startup OpenAI, but it was the first one that became publicly available, and for free to boot.
The advancement of AI tools also represents a direct challenge, and a possible sea change, in how we think of and use internet search engines.
Unlike a search engine response to a question, which simply points you to the answer where it already lives on the internet, ChatGPT generates its own original answers based on all the information, culled from the internet, that it has already ingested and assessed. Thus, while Google isn’t going to help you write a sonnet in the style of, say, Hunter S. Thompson, ChatGPT will easily churn that out for you and in just a matter of moments. So, staying on the cutting edge of artificial intelligence advancement is a business survival necessity, as far as Google is concerned.
To that end, Google released its own AI chatbot, Bard, to select users in February and opened public access earlier this month.
While continuing to advance at a rapid pace, and already very good at emulating human responses in both tone and content, ChatGPT and other large language model upstarts still produce output rife with mistakes, which makes a certain sense when you consider that the systems rely on the collective “knowledge” of internet postings to construct responses.
In addition to systemic “garbage in, garbage out” issues, other baked-in chatbot parameters can limit the usefulness of chat-in-lieu-of-search utilities. For instance, ChatGPT’s training data only extends through 2021, so if you’re looking for the latest hot restaurants or sports scores, you’re out of luck.
While OpenAI spreads its wares to the mobile space, federal lawmakers are scrambling to figure out how to appropriately regulate the emerging tools, as some AI experts warn of potential AI-invoked catastrophes.
To that end, OpenAI co-founder and CEO Sam Altman was one of three witnesses at a U.S. Senate committee hearing earlier this week that was described as the first in a series of efforts “intended to write the rules of AI,” according to Sen. Richard Blumenthal, D-Conn. Blumenthal chairs the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, which hosted the hearing on Tuesday.
At that proceeding, Altman readily agreed with committee members that new regulatory frameworks were in order as AI tools in development by his company and others continue to take evolutionary leaps and bounds. He also warned that AI has the potential, as it continues to advance, to cause widespread harm.
“My worst fears are that we, the field of technology industry, cause significant harm to the world,” Altman said. “I think that can happen in a lot of different ways. I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that.
“We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work we have to do to mitigate that.”