OpenAI, the company behind the incredibly viral ChatGPT chatbot, began rolling out an even more powerful language model, GPT-4, on March 14, succeeding the technology that powered ChatGPT. The latest model can not only respond to queries but also ask questions, and it can generate content from images, not just text. OpenAI has branded the technology as multimodal, referring to its more extensive set of capabilities.
That is, users can prompt it with text or images, and GPT-4 will provide an in-depth answer drawing on its training data from across the internet. No doubt, the developers put in massive hours of work to assemble the powerful language tool. The company said it completed a raw version of the model last August, and the team then spent the following months refining and improving it.
- GPT-4 offers conversational search results, which makes the generated answers more immersive and easier to understand.
- It is more versatile and supports different forms of interaction, such as text and images.
- GPT-4 uses a vast dataset from across the internet, which allows for more comprehensive and accurate search results.
- Results are verified before they are presented to the user, which helps ensure the authenticity of the information.
GPT-3.5 vs GPT-4
The previous AI language model, GPT-3.5, responded only to text prompts. The latest model is larger and also accepts image inputs, recognizing and analyzing them. If you didn't know, GPT-3.5's responses are limited to around 3,000 words, dwarfed by GPT-4's enthralling 25,000-word limit. Compared to the previous model, GPT-4 scores 40% higher on factual accuracy in certain evaluations. It also lets developers decide their AI program's tone and diction. And it is 82% less likely to respond to requests for disallowed content.
For instance, OpenAI says that GPT-4 can emulate a Socratic style of exchange and even pose questions in its responses. Its predecessor had a comparatively fixed tone and style; the latest version is more versatile and more capable.
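The tone-and-diction control described above is exposed to developers through "system" messages in OpenAI's Chat Completions API. Below is a minimal sketch of how such a request could be assembled; the Socratic-tutor prompt wording and the `build_socratic_request` helper are illustrative assumptions, not OpenAI's exact usage.

```python
def build_socratic_request(user_question: str) -> dict:
    """Build a Chat Completions request body whose system message steers
    GPT-4 toward a Socratic tutoring style instead of direct answers.

    The prompt text here is a hypothetical example of a developer-chosen
    persona, not an official OpenAI prompt.
    """
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a Socratic tutor. Never state the answer "
                    "outright; guide the student with probing questions."
                ),
            },
            # The user's actual question follows the system instruction.
            {"role": "user", "content": user_question},
        ],
    }


request = build_socratic_request("Why is the sky blue?")
print(request["messages"][0]["role"])  # → system
```

The resulting dictionary is what a client would send to the API endpoint; swapping out the system message is all it takes to change the assistant's persona.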
GPT-4 has outperformed its predecessor on the US bar exam and the Graduate Record Examination (GRE). Greg Brockman, the president of OpenAI, demonstrated that GPT-4 can help users calculate their taxes. Similarly, the model was shown turning a crude hand-drawn sketch into a working website.
But still, GPT-4 has limitations
GPT-4 shares many limitations with its predecessor. Simply put, despite being overwhelmingly good, it is still not as capable as humans "in many real-world scenarios". It has little knowledge of events after September 2021, and the model also can't learn from experience.
Who can use GPT-4?
For now, GPT-4's availability is limited. ChatGPT Plus subscribers and developers get only the text-input feature; the image-input facility has not yet been rolled out to the public. The premium ChatGPT tier launched in February for $20 per month and offers faster responses and priority access. Microsoft said that some versions of Bing were already using GPT-4 before its official launch. In the meantime, you can try ChatGPT on OpenAI's website.
How do you feel about an even more powerful AI language model for chatbots in the wake of the ChatGPT phenomenon? Will they drown out Google, or can the two coexist? Do share your opinion in the comments below.