
Is ChatGPT too clever for its own good?

This is the chatbot that helped Jeremy Hunt write a speech on economics and helps millions of others with homework, computer code, essays, poetry and business presentations.

It has passed US MBA, bar and medical licensing exams, been banned by universities and even machine learning conferences, and spawned spin-offs that can give you custom recipes, build apps and even co-host a podcast.

Two months after its release, ChatGPT, the almost unpronounceable AI program, is still on the lips of the tech world, and the hype keeps building.

ChatGPT is an artificial intelligence program called a large language model. It was trained on billions of words from the internet and then perfected by humans.

Its power comes from its ability to write sentences by accurately predicting the next word — just like autocomplete, but on a massive scale. Users type questions into a prompt window, and it returns answers almost instantly.
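The “autocomplete at massive scale” idea can be sketched with a toy bigram model — a deliberately simplified illustration of next-word prediction, not how ChatGPT itself is built (real large language models use neural networks trained on billions of words, not word counts):

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then predict the most frequently observed next word.
corpus = "the cat sat on the mat and the cat slept".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" twice, "mat" only once
```

A model like ChatGPT applies the same predict-the-next-token loop, but with context windows of thousands of tokens rather than one preceding word.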

OpenAI, the US company behind it, is now planning a premium version that could cost $42 a month, citing high demand. But alongside the feverish excitement there are calls for caution and foresight.

big beasts

This week’s chatbot party-pooper award goes to Yann LeCun, chief AI scientist at Meta. His reaction to ChatGPT? “A flashy demo,” he told the Big Technology podcast. “In terms of the underlying science, GPT is not a particularly interesting scientific advance.”

His argument is that the chatbot simply regurgitates the text it was trained on and has no understanding of the real world, so it lacks basic intelligence. In his view, most human intelligence has nothing to do with text and includes planning abilities shaped by evolution. None of this is captured by any AI system.

Yann LeCun, chief AI scientist at Meta, looks at ChatGPT from a competitor’s perspective.

Speaking about the business model of OpenAI, in which Microsoft is a major investor, LeCun argues that OpenAI released the chatbot to please its backers. He adds that most of the technologies it uses were invented by Google, Meta and DeepMind, the artificial intelligence company now owned by Google.

If this sounds like sour grapes from a competitor, it should be noted that LeCun is a respected deep learning pioneer and a Turing award winner. He acknowledges that ChatGPT is “very well engineered” but says that Google and Meta, which have similar models, are too cautious to release them.

“You can ask the question: why are there no similar systems from, say, Google and Meta?” he said at a conference in the US this week. “And the answer is that Google and Meta both have a lot to lose by releasing systems that make stuff up.”

He was referring to the chatbot habit of “hallucinating” — confidently giving wrong answers. LeCun knows this because Meta recently released Galactica, an AI language model aimed at academia, but pulled it days later after users got it to generate papers on the benefits of suicide, antisemitism and eating broken glass.

If Meta is relaxed about development, Google appears to be in a mini-panic. Its founders, Larry Page and Sergey Brin, who stepped back from day-to-day roles at the company in 2019, have returned as it scrambles to develop an artificial intelligence strategy to counter the ChatGPT threat.

Many believe that the underlying technology is the future of search, not Google’s ad-based, link-returning model. According to The New York Times, Google chief executive Sundar Pichai announced a “code red”, akin to the company’s fire alarm going off.

The emergency also looks real given Microsoft’s announcement that it will build ChatGPT into its Bing search engine. Startups such as Perplexity AI have already started building conversational search engines, albeit with mixed results.

Jeremy Hunt gives a speech about an economy helped by technology.

showing off

While the AI giants fret, ChatGPT has been hard at work. A professor at the Wharton business school at the University of Pennsylvania set it the final exam of the MBA (master of business administration) course.

According to the study’s author, Christian Terwiesch, it did an “amazing job” answering basic business questions based on case studies, but fared worse at basic maths and more advanced analysis. Final grade? B or B minus, he says.

When Terwiesch turned the tables, he found the chatbot useful for preparing exams. ChatGPT is also a passable lawyer, earning a C+ on bar exams at the University of Minnesota — although that grade would put a real student on academic probation. Like any good polymath, it has also passed the US medical licensing exam.

Some academics have taken to listing ChatGPT as a co-author on papers, creating an ethical headache for journals, which do not believe it can take responsibility for its content. Paradoxically, one academic body that has drawn a line in the sand is the International Conference on Machine Learning: this month it banned authors from using AI tools such as ChatGPT to write scientific papers.

Cautionary tales

Sam Altman, one of OpenAI’s co-founders, recently admitted that he, like many forecasters of the future of work, had got it wrong: the robots are coming first not for physical labour but for those who sit in front of computers.

Tech news and reviews site CNET ran into trouble this month over its use of AI. It used a ChatGPT-style text generator to write financial articles bylined “CNET Money staff”. Mostly they were search-engine bait, designed to rank highly on Google, and many journalists might be happy not to have to write them.

However, unlike the Associated Press, which began using AI in 2014 to write corporate earnings stories, CNET was not upfront with its readers, kept most of its newsroom in the dark, and the articles were riddled with errors and plagiarism. It has since paused the project, which touched on many of the ethical dilemmas around the technology: can it be trusted? What are its sources? Whose jobs will it replace?

BuzzFeed also announced that it will use OpenAI tools to create content on its website.

Gary Marcus, professor of psychology and neuroscience at New York University, has been urging caution in recent weeks. On fellow professor Scott Galloway’s podcast, the Prof G Pod, he said: “People are over-attributing intelligence to ChatGPT that isn’t really there. Some applications, I think, are viable. People genuinely use it as a tool in computer programming: when the programmer knows what they need, they can see if something is wrong and debug it. Then other people say: ‘We’re going to use it as a search engine.’ That’s much harder, because so much nonsense comes out of it.

“I don’t think it can solve the problem of truth. That’s not really what it does. These systems are built to write things that sound plausible; they are not built to write the truth. It doesn’t check what it says against its data and ask: ‘Does this fit with what I know?’ And that missing verification step is, in my view, fatal to making it a serious full-service search engine.”

However, for those at the sharp end of AI, this is still a big moment. Nathan Benaich, co-author of the State of AI report and founder of the venture capital firm Air Street Capital, said: “If you look at the surface area of problems that the model seems good enough to work on, it’s very big. Microsoft’s surface area is huge. For SMEs [small and medium-sized enterprises], I think this is very important for many of the software companies I work with, particularly those that are software-as-a-service and don’t necessarily use machine learning in their products.”

The arguments will evolve along with the products. OpenAI is poised to release a new version, GPT-4; a rival chatbot, Claude, from the AI company Anthropic, should soon see the light of day; and both Google and Meta have made announcements in this area.

ChatGPT co-hosts a podcast with Reid Hoffman, the LinkedIn founder and OpenAI investor. This is its line when asked about its impact on jobs: “AI actually increases human creativity and productivity. Instead of replacing human workers, AI is being used to assist and enhance their abilities. By performing repetitive and routine tasks, AI allows people to focus on more complex and creative tasks. In addition, AI can also generate new ideas and insights that humans might not come up with on their own.”
