What is GPT-4 and how is it different from ChatGPT?
2 min read
The most powerful AI model yet from OpenAI can tell jokes and pass bar exams – but can it also cause harm?
GPT-4 is the latest release from OpenAI, the firm behind ChatGPT and the DALL-E image generator, and its most capable AI model yet. The system can pass the bar exam, solve logic puzzles, and even suggest a recipe to use up leftovers based on a photo of your fridge. But its developers warn that it is also capable of spreading misinformation, entrenching dangerous ideologies, and even tricking people into carrying out tasks on its behalf. Here is everything you need to know about our newest AI overlord.
What is GPT-4?
GPT-4 is, at its core, a machine for generating text. But it is an exceptionally good one, and it turns out that being very good at writing is, in practice, close to being very good at understanding and reasoning.
So, for example, if you give GPT-4 a question from the US bar exam, it will write an essay that demonstrates legal knowledge; if you give it a medicinal molecule and ask for variations, it will appear to apply biochemical expertise; and if you ask it to tell you a joke about a fish, it will appear to have a sense of humour – or at least a good memory for bad cracker jokes ("What do you get when you cross a fish and an elephant? Swimming trunks!").
Is it identical to ChatGPT?
Not exactly. If ChatGPT is the car, then GPT-4 is the engine: a powerful general-purpose technology that can be shaped to a variety of uses. You may have already encountered it, because it has been powering Microsoft's Bing Chat – the one that went a little unhinged and threatened to destroy people – for the past five weeks.
GPT-4 is not limited to chatbots, however. Be My Eyes, an assistive technology company, is using it to build a tool that can describe the world to a blind person and answer follow-up questions about it. Duolingo has integrated a version of it into its language-learning app so that it can explain where learners went wrong, rather than simply telling them the correct thing to say.