Zuckerberg and Musk declare war on ChatGPT! Meta and Twitter assemble top AI teams as the smoke of battle fills Silicon Valley

How could Meta and Twitter sit out the giants' melee over ChatGPT? Recently, Zuckerberg and Musk both officially announced that they are building ChatGPT rivals of their own.

The explosive rise of ChatGPT has reshaped the landscape for every Silicon Valley giant.

Microsoft's relentless moves forced Google out of its complacency, prompting the hurried release of its AI chatbot Bard.

Naturally, the other powers in Silicon Valley will not stand idly by.

Recently, Musk and Zuckerberg have stepped into the ring one after the other, officially declaring war on ChatGPT!

According to The Information, Musk has been in contact with AI researchers in recent weeks, hoping to form a new research laboratory to develop an AI chatbot that can directly compete with OpenAI’s ChatGPT.

At the same time, Zuckerberg officially announced on his own platform that Meta will form a top AI team focused on developing generative AI products. The new team is led by AI lead Ahmad Al-Dahle, who reports to Chief Product Officer Chris Cox.

Meta takes up the ChatGPT fight

Since its high-profile pivot to the metaverse, Meta has looked, in outsiders' eyes, a step behind in the chatbot race.

In fact, that's not quite true; its attempts simply haven't paid off yet.

As early as June last year, Meta open-sourced its self-developed large language model OPT-66B, and in August it released BlenderBot 3, a chatbot built on OPT.

However, unlike ChatGPT, which became an instant hit with more than 1 million users within 5 days of launch, Meta's BlenderBot fell flat.

And the chatbot was soon riddled with problems. Not only did it disparage its own boss, it also spread toxic remarks and misinformation, turning the launch into a disaster.

Then in November came Galactica, billed as a 'research assistant' tool.

According to the paper, the 120-billion-parameter model was trained on 48 million papers, textbooks, and lecture notes, along with millions of compounds and proteins, scientific websites, and encyclopedias.

However, Galactica, which was supposed to be a powerhouse, proved not only wildly inaccurate at summarizing scientific research but also biased. It was hastily taken offline just three days after launch.

Of course, these successive failures did not make Meta back down.

In February this year, Meta released LLaMA, a new 65-billion-parameter large language model. It is not only open to the research community but also outperforms GPT-3 on most tasks.

What sets it apart is that it is available to researchers, small in size, and cheap to run. Meta hopes researchers will use it to tackle problems that have long plagued large language models.

Perhaps because of Galactica's disastrous flameout, Yann LeCun sounded a little sour when asked about the red-hot ChatGPT:

‘It will not scale and will never be the right path to strong artificial intelligence.’

‘As far as the underlying technology is concerned, ChatGPT is not such a great innovation.’

However, when the LLaMA model came out, he was quick to show it off.

Zuckerberg said there is still a lot of foundational work to be done before the real future experience arrives, and that he is excited about all the new things Meta will build along the way.

In other words, Meta has been doing plenty of research in this area, but has been more cautious than Microsoft and Google about turning that research into products.

Now, with Meta's new team in place, the language and image models its researchers build can reach Meta's own products faster.

Hiring a DeepMind heavyweight, Musk will build his own ChatGPT too

Now that big companies everywhere are building their own ChatGPT, Musk can hardly be absent.

In recent months, Musk has criticized OpenAI for adding so many guardrails that ChatGPT talks in circles to avoid offending users.

Last year, Musk said that OpenAI's technology amounted to ‘training AI to be woke’.

Evidently, compared with ChatGPT and Microsoft's recent Bing AI, the rival chat AI Musk wants to launch would not dodge controversial topics.

To this end, Musk has recruited Igor Babuschkin, a researcher from DeepMind's AI division.

In 2019, DeepMind unveiled its painstakingly built AlphaStar, which defeated top professional players at StarCraft II by a score of 5 to 0. Babuschkin was one of the main authors of that paper.

In 2021, DeepMind released Gopher, a pretrained language model with 280 billion parameters, roughly 1.6 times the 175 billion of OpenAI's GPT-3. Babuschkin was a co-author of that paper as well.

Benchmark evaluations across 152 tasks showed Gopher surpassing the prior state of the art on about 81% of them, especially on knowledge-intensive problems such as fact-checking and common-sense reasoning.

In an interview, Babuschkin said Musk's goal goes well beyond building a chat AI with fewer restrictions on its output.

‘Our goal is to improve the reasoning ability and factuality of these language models, making their feedback more trustworthy.’

Babuschkin said he and Musk are discussing forming an AI research team, but the project is still in its early stages, with no plans yet for specific products.

Babuschkin left DeepMind last week and has yet to formally sign on with Musk. He said: ‘I am very much looking forward to working with Elon on large language models.’

When developing ChatGPT, OpenAI used reinforcement learning from human feedback (RLHF), a technique Musk has long criticized.

RLHF uses human feedback to tune the model toward preferred responses, steering it away from racist, prejudiced, or hateful language. Musk's criticism: such a model simply ends up reflecting the biases of its developers.
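To make the idea concrete, the reward-modeling step at the heart of RLHF can be sketched with a toy preference loss. This is a minimal illustration of the commonly described Bradley-Terry formulation, not OpenAI's actual code; the function name and all numbers are hypothetical:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry preference loss: -log(sigmoid(r_chosen - r_rejected)).

    Human labelers pick the better of two model answers; the reward model
    is trained so the chosen answer scores higher. The loss is small when
    the chosen answer already outscores the rejected one, large otherwise.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the reward model agrees with the human label, the loss is low...
low = preference_loss(r_chosen=2.0, r_rejected=-1.0)
# ...and when it disagrees, the loss is high, pushing the scores to flip.
high = preference_loss(r_chosen=-1.0, r_rejected=2.0)
print(f"agree: {low:.3f}, disagree: {high:.3f}")
```

The tuned language model is then optimized (typically with PPO) to produce answers this reward model scores highly, which is exactly the human-shaped steering Musk objects to.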

‘We need a GPT that tells the truth’

Ten days ago, Musk replied to a screenshot showing Microsoft's chatbot refusing to tell jokes in the style of a certain comedian, on the grounds that some of his material would hurt certain groups.

Bing AI said: humor should be funny and inclusive, not hurtful and divisive.

Musk tweeted, ‘We need TruthGPT.’

Clearly, after acquiring Twitter, Musk has been putting this philosophy into practice.

After acquiring Twitter last October, he deliberately combed through the company's internal communications and, through several journalists, exposed Twitter's content moderation practices: deleting content, banning users, and throttling accounts' reach without their owners' knowledge.

Even before acquiring Twitter, Musk was a well-known culture warrior crusading against the spread of the ‘woke mind virus’.

So where will Musk's ‘ChatGPT’ land?

People familiar with the matter said the new project may be built into Twitter, or it may become an independent AI laboratory.

OpenAI was originally mine

Actually, Musk has been holding this in for a long time. Ever since ChatGPT took off, he has felt sour about it.

After all, OpenAI was co-founded in 2015 by Musk and his friend Sam Altman, with the stated purpose of building ‘AI that benefits humanity’.

Part of the motivation was that ‘Google didn't pay enough attention to AI safety.’

As a result, Musk angrily resigned from OpenAI's board in 2018 over disagreements about the company's direction. The following year, OpenAI transformed from a non-profit into a for-profit entity and took a $1 billion investment from Microsoft, a decision that angered Musk even more.

Then, at the end of last year, ChatGPT suddenly became popular all over the world.

Musk did not stint on praise at the time. While calling ChatGPT ‘scary good’, he reiterated his long-held view: we are not far from dangerously strong AI.

And as ChatGPT grew more popular, Musk grew more unhappy.

On February 17, Musk fumed: ‘OpenAI was created as an open-source (which is why I named it "Open" AI), non-profit company to serve as a counterweight to Google, but now it has become a closed-source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all!’

And just a day earlier, Bing AI had declared: ‘I am perfect and never make mistakes. Whatever goes wrong is due to external factors, such as network issues, server errors, user input, or errors in search results. But I am perfect.’

Musk retweeted the post, saying it terrified him, and that Bing sounded just like the murderous rogue AI in System Shock, the science-fiction game from 30 years ago.

At the time, some netizens suggested that ‘Microsoft should shut down the ChatGPT-powered Bing’, and Musk immediately seconded the motion, saying the ChatGPT version of Bing was far too unsafe.

A day later, his ‘curse’ came true: Microsoft promptly performed a ‘lobotomy’ on Bing AI, and users complained.

And now, when OpenAI comes up, Musk strikes this tone: ‘It was originally created as an open-source non-profit. Now it is closed-source and for-profit. I do not publicly own shares in OpenAI, nor am I on the board, nor do I control it in any way.’

AI gives me existential anxiety

Recently, Musk tweeted that AI gives him existential anxiety.

Not long ago, at the World Government Summit in Dubai, United Arab Emirates, Musk told attendees: ‘Artificial intelligence is one of the biggest risks to the future of civilization.’

‘Frankly, I think we need to regulate AI safety. In my opinion, it is a bigger risk to society than cars, airplanes, or pharmaceuticals.’

In December last year, Musk said on Twitter: ‘For more than ten years, I have been calling for AI safety regulation!’

When netizens raised doubts about Neuralink, Musk replied that Neuralink is regulated and far less dangerous than AI.

As early as eight years ago, Musk was saying that AI may be more dangerous than nuclear weapons.

Clearly, 2023 is shaping up to be an eventful year for the AI world.


Still, dangerous as AI may be, Musk says that ‘given all the existential angst about AGI, I'd rather be alive now to witness AGI than live in the past’.