About GPT-4

Context:

AI powerhouse OpenAI has announced GPT-4, the next major update to the technology that powers ChatGPT and Microsoft’s Bing search engine.

Relevance:

GS III: Science and Technology

Dimensions of the Article:

  1. About GPT-4
  2. How is GPT-4 different from GPT-3?

About GPT-4

OpenAI has announced the launch of GPT-4, a new and improved language model that surpasses its predecessor, GPT-3.5 (the model behind ChatGPT), in speed, accuracy, and capability. Here’s what you need to know about GPT-4:

  • GPT-4 is a large multimodal model that can process more than just text inputs. It also accepts images as input, allowing for image captioning and analysis.
  • OpenAI claims that GPT-4 exhibits human-level performance on various professional and academic benchmarks. It can even pass the Uniform Bar Exam for aspiring lawyers in the US, with a score in the top 10% of test takers.
  • GPT-4 has broader general knowledge and stronger problem-solving abilities than its predecessor, allowing it to handle difficult tasks like answering tax-related questions, scheduling meetings, and learning a user’s creative writing style.
  • With the ability to handle over 25,000 words of text, GPT-4 opens up a greater number of use cases that include long-form content creation, document search and analysis, and extended conversations.

How is GPT-4 different from GPT-3?

GPT-4 can ‘see’ images now: 
  • The most noticeable change to GPT-4 is that it’s multimodal, allowing it to understand more than one modality of information.
  • GPT-3 and ChatGPT’s GPT-3.5 were limited to textual input and output, meaning they could only read and write. However, GPT-4 can be fed images and asked to output information accordingly.
  • If this reminds you of Google Lens, then that’s understandable. But Lens only searches for information related to an image.
  • GPT-4 is a lot more advanced in that it understands an image and analyses it.
  • An example provided by OpenAI showed the language model explaining the joke in an image of an absurdly large iPhone connector. The only catch is that image inputs are still a research preview and are not publicly available.
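Image input was only a research preview at launch, so the request shape below is an assumption rather than something the article describes: it is a minimal sketch of how a multimodal prompt is typically sent to a vision-capable GPT-4 model using the OpenAI Python SDK (v1+ interface), with the model name and image URL as illustrative placeholders.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # A multimodal prompt: one user message carrying both text and an image reference.
    # The model name and image URL are placeholders, not values from the article.
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable GPT-4 family model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Explain what is unusual or funny about this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/oversized-iphone-connector.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)
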
GPT-4 is harder to trick: 
  • One of the biggest drawbacks of generative models like ChatGPT and Bing is their propensity to occasionally go off the rails, generating responses that raise eyebrows, or worse, downright alarm people.
  • They can also get facts mixed up and produce misinformation.
  • OpenAI says that it spent 6 months training GPT-4 using lessons from its “adversarial testing program” as well as ChatGPT, resulting in the company’s “best-ever results on factuality, steerability, and refusing to go outside of guardrails.”
GPT-4 can process a lot more information at a time:
  • Large Language Models (LLMs) may have billions of parameters and be trained on vast amounts of data, but there are limits to how much information they can process within a single conversation.
  • ChatGPT’s GPT-3.5 model could handle 4,096 tokens, or roughly 3,000 words, while GPT-4 pushes those numbers up to 32,768 tokens, or roughly 25,000 words.
  • This increase means that where ChatGPT started to lose track of a conversation after a few thousand words, GPT-4 can maintain context over far longer conversations.
  • It can also process lengthy documents and generate long-form content – something that was a lot more limited on GPT-3.5.
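Tokens are not the same as words: a token is a chunk of text that, in English, works out to roughly three-quarters of a word, which is how 32,768 tokens translates to about 25,000 words. As a small sketch (using the open-source tiktoken library, which the article itself does not mention), you can see how a sentence splits into tokens:

    # pip install tiktoken
    import tiktoken

    # cl100k_base is the tokeniser used by the GPT-3.5 and GPT-4 chat models.
    enc = tiktoken.get_encoding("cl100k_base")

    text = "GPT-4 expands the context window from 4,096 tokens to 32,768 tokens."
    tokens = enc.encode(text)
    print(len(text.split()), "words ->", len(tokens), "tokens")

    # Rule of thumb: 1 token is roughly 0.75 English words, so 32,768 tokens
    # corresponds to roughly 25,000 words of context.
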
GPT-4 has improved accuracy:
  • OpenAI admits that GPT-4 has similar limitations as previous versions – it’s still not fully reliable and makes reasoning errors.
  • However, “GPT-4 significantly reduces hallucinations relative to previous models” and scores 40 per cent higher than GPT-3.5 on factuality evaluations.
  • It will be a lot harder to trick GPT-4 into producing undesirable outputs such as hate speech and misinformation.
GPT-4 is better at understanding languages that are not English: 
  • Machine learning data is mostly in English, as is most of the information on the internet today, so training LLMs in other languages can be challenging.
  • But GPT-4 is more multilingual, and OpenAI has demonstrated that it outperforms GPT-3.5 and other LLMs by accurately answering thousands of multiple-choice questions across 26 languages.
  • It obviously handles English best with an 85.5 per cent accuracy, but Indian languages like Telugu aren’t too far behind either, at 71.4 per cent.
  • What this means is that users will be able to use chatbots based on GPT-4 to produce outputs with greater clarity and higher accuracy in their native languages.

Can you try GPT-4 right now?

  • GPT-4 has already been integrated into products like Duolingo, Stripe, and Khan Academy for varying purposes.
  • While it’s yet to be made available to everyone for free, a $20-per-month ChatGPT Plus subscription gets you immediate access. The free tier of ChatGPT, meanwhile, continues to be based on GPT-3.5.
  • However, if you don’t wish to pay, then there’s an ‘unofficial’ way to begin using GPT-4 immediately. 
  • Microsoft has confirmed that the new Bing search experience now runs on GPT-4 and you can access it from bing.com/chat right now.
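For developers, GPT-4 can also be reached programmatically through OpenAI’s API (initially behind a waitlist, which the article does not cover). The snippet below is a minimal sketch, assuming the official openai Python SDK (v1+ interface) and an API key set in the environment; the prompt is purely illustrative.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # A basic text-only chat request to GPT-4.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarise the difference between GPT-3.5 and GPT-4 in two sentences."},
        ],
    )
    print(response.choices[0].message.content)
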

Source: Indian Express

