Google’s coming-out party earlier this week for its new artificial intelligence chatbot, Bard, was marred by a glaring factual error the chatbot generated, one that was widely shared in press outreach and on social media channels on Monday.

The miss, first reported by Reuters, likely played a role in a $160 billion drop in market value for Google parent company Alphabet that began on Wednesday and continued through trading on Thursday.

Alphabet stock lost 7% of its value on Wednesday and just over an additional 4% on Thursday. Even with the slide, Google holds its position as the fourth most valuable company in the world, with a market value of $1.2 trillion.

Google posted a short GIF of Bard in action via Twitter on Monday, promising the tool would help simplify complex topics, but it instead delivered an inaccurate answer, per Reuters.

In the advertisement, Bard is given the prompt: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”

Bard responds with a number of answers, including one suggesting the Webb telescope was used to take the very first pictures of a planet outside Earth’s solar system, known as an exoplanet. The first pictures of exoplanets were, however, taken by the European Southern Observatory’s Very Large Telescope in 2004, as confirmed by NASA, according to Reuters.

Google announced Monday that access to Bard is currently open only to select testers but would become available to the public in “the coming weeks.”

So, what is Bard, exactly?

In its Monday announcement, search giant Google unveiled details of its artificial intelligence engine, which can provide natural language responses to user queries. The tool is similar to ChatGPT, the natural language AI engine that opened to public access last November.

ChatGPT has created a firestorm of interest and concern since its launch, thanks to its advanced ability to construct responses to user questions and directions. Bard joins ChatGPT as a member of a new generation of AI systems that can converse and generate readable text on demand based on what they’ve learned from a vast database of digital books, online writings and other media.

The tidal wave of interest that has followed the emergence of ChatGPT is of concern to Google for a number of reasons.

First, ChatGPT developer OpenAI appears to be outpacing Google when it comes to developing AI tools, even though the king of search has pumped billions of dollars into the effort. ChatGPT is just the latest in a series of AI system releases from the Microsoft-backed startup, but it was the first to be publicly available, and free to boot.

The advancement of AI tools also represents a direct challenge, and a possible sea change, in how we think of and use internet search engines.

Unlike a search engine’s response to a question, which simply points you to the answer where it already lives on the internet, ChatGPT generates its own original answers based on all the information it has ingested and assessed. Thus, while Google isn’t going to help you write a sonnet in the style of, say, Hunter S. Thompson, ChatGPT will churn one out for you in a matter of moments. So staying on the cutting edge of artificial intelligence is a business survival necessity, as far as Google is concerned.

Why is Bard’s error such a big deal?

Errors in ChatGPT’s output have generated plenty of commentary and criticism of their own, as have concerns that the tool, and others like it, can generate student essays and homework assignments in a manner that makes them very difficult to distinguish from human-created work using current plagiarism detection software.

From an investor standpoint, Google’s debuting Bard with an obvious error may have only underscored that the search giant is playing catch-up when it comes to advancing its own AI tools. And the flub is likely raising new questions about whether Google can catch up to the work OpenAI has been advancing.

ChatGPT’s emergence has also spawned countless internet rumors and conspiracy theories, including predictions that the system puts humanity on the cusp of a “singularity” event, in which a computer program transcends human intelligence, leading to all manner of unpredictable mayhem and madness.

But OpenAI CEO Sam Altman has discounted those fears on numerous occasions, pointing to the opportunities ChatGPT’s advancements represent while warning against overblowing, or over-interpreting, what it all means.

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” Altman wrote in a December tweet. “It’s a mistake to be relying on it for anything of import right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”