OpenAI has launched GPT-4: the new version of the neural network has become much smarter and has gained support for images

OpenAI has introduced GPT-4, the latest iteration of its large language model, which demonstrates "human-level performance" on many professional tests. GPT-4 is much larger than previous versions, meaning it was trained on more data and has more parameters, which also makes it more expensive to run.

OpenAI's GPT language model underpins many of the AI systems that have amazed the tech world over the past six months, including ChatGPT and the AI-powered Bing search engine. GPT-4 is a preview of advances that could make their way into consumer products such as chatbots in the coming weeks. Bing's AI bot already uses GPT-4, Microsoft announced on Tuesday.

The main innovation of GPT-4 is that it accepts not only text but also images as input. The new version of the neural network can understand what is in a photo, diagram, drawing, or other image and take that data into account when solving a problem. The AI can also explain data presented in the form of a chart. For now, however, image support remains in closed testing.

OpenAI claims that the new model gives fewer factually incorrect answers, goes off the rails and discusses forbidden topics less often, and even outperforms humans on many standardized tests. On that last point, according to OpenAI, GPT-4 scored better than 90% of people on a simulated bar exam, better than 93% of people on the SAT reading test, and better than 89% of people on the SAT math test.

However, OpenAI warns that the new software is not yet perfect and falls short of humans in many scenarios. According to the company, the model still has significant problems with "hallucinations" (making up facts), so it is not reliable as a source of facts. GPT-4 also still tends to insist it is right when it is wrong.

"GPT-4 still has many known limitations that we are working to fix, such as social biases, hallucinations, and adversarial prompts," the company said in a statement. "In casual conversation, the difference between GPT-3.5 and GPT-4 can be subtle. The difference becomes apparent when the complexity of the task reaches a sufficient threshold: GPT-4 is more reliable, more creative, and able to handle much more nuanced instructions than GPT-3.5."

Many AI researchers currently believe that most of the latest advances in AI come from running ever larger models trained on thousands of computer systems. Such training can cost tens of millions of dollars. GPT-4 is an example of this scale-up approach producing better results.

Microsoft has invested billions in OpenAI, and its Microsoft Azure cloud infrastructure was used to train the GPT-4 model. The developers have not released details about the model's exact size or the hardware it was trained on — information that could be used to reproduce the model — citing the "competitive landscape."

The new model will initially be available to paid ChatGPT subscribers, and also through an API that lets third-party developers integrate the AI into their applications. To get access, developers must join a waiting list for the API. OpenAI charges about 3 cents per roughly 750 words of prompt and about 6 cents per roughly 750 words of response.
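As a rough illustration, those quoted prices can be turned into a simple per-request cost estimate. The helper below is a hypothetical sketch based only on the figures in this article (the function name and the 750-words-per-billing-unit assumption are ours, not an official OpenAI tool):

```python
# Illustrative cost estimate based on the prices quoted above:
# ~$0.03 per ~750 words of prompt, ~$0.06 per ~750 words of response.

PROMPT_COST_PER_750_WORDS = 0.03    # USD, per the article
RESPONSE_COST_PER_750_WORDS = 0.06  # USD, per the article

def estimate_cost(prompt_words: int, response_words: int) -> float:
    """Return an approximate USD cost for one request, given word counts."""
    prompt_cost = prompt_words / 750 * PROMPT_COST_PER_750_WORDS
    response_cost = response_words / 750 * RESPONSE_COST_PER_750_WORDS
    return round(prompt_cost + response_cost, 4)

# Example: a 1,500-word prompt that yields a 750-word answer
print(estimate_cost(1500, 750))  # 0.12
```

So a fairly long exchange still costs only pennies, though costs add up quickly for applications making thousands of requests.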


About the author

Robbie Elmers

Robbie Elmers is a staff writer for Tech News Space, covering software, applications and services.
