
Quick Concepts: GPT-4

What is GPT-4?

GPT-4 is the latest large language model (LLM) from OpenAI and the successor to GPT-3.5, which powers OpenAI’s ChatGPT. GPT-4 (Generative Pre-trained Transformer, version 4) is a multimodal model: it can accept prompts containing text, images, or both, and produces its responses as text. GPT-4 was released on March 14, 2023 and is currently available through ChatGPT Plus and an API.
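As a rough illustration of what using GPT-4 through the API looks like, the sketch below assembles a chat-completion request with the OpenAI Python SDK's documented message format. The helper function, system prompt wording, and example question are illustrative assumptions, not OpenAI's code; the `gpt-4` model ID and message schema follow OpenAI's published API.

```python
# Illustrative sketch: preparing a text prompt for GPT-4 via the OpenAI API.
# The helper and prompt text are assumptions for illustration; the payload
# shape (model + role/content messages) follows OpenAI's chat API schema.

def build_chat_request(prompt: str) -> dict:
    """Assemble a chat-completion payload for a plain-text prompt."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Summarize the theory of relativity in one sentence.")

# The actual network call requires the `openai` package and a valid API key:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```

The response arrives as structured JSON; the generated text sits in the first choice's message content.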

How is GPT-4 different from GPT-3.5?

According to OpenAI, GPT-4 delivers the following improvements over GPT-3.5: 

  • GPT-4 scored in the top 10% on a simulated bar exam, while GPT-3.5 scored in the bottom 10% 
  • GPT-4 is 82% less likely to respond to requests for disallowed content, thanks to extra reward signals added during the RLHF (Reinforcement Learning from Human Feedback) phase of training 
  • GPT-4 is 40% more likely to produce factual responses and less likely to hallucinate 
  • GPT-4 is more capable of understanding complex tasks and nuanced instructions 
  • GPT-4 can respond reliably to prompts with a combination of text and images such as photos, diagrams, and screenshots 
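The mixed text-and-image prompting in the last point can be pictured with the content-part message format OpenAI later exposed for GPT-4 models with vision. The sketch below only builds the message structure; the helper name and placeholder URL are assumptions, while the `text` / `image_url` part types follow OpenAI's documented schema.

```python
# Illustrative sketch: one user message combining a question with an image
# reference, in the content-part format used for GPT-4 models with vision.
# The URL is a placeholder; no API call is made here.

def build_multimodal_message(question: str, image_url: str) -> dict:
    """Pair a text question with an image reference in a single user message."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_multimodal_message(
    "What is unusual about this photo?",
    "https://example.com/photo.jpg",  # placeholder image URL
)
```

A message like this is sent in the same `messages` list as plain text prompts; the model's answer still comes back as text.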

What are the limitations of GPT-4?

GPT-4’s limitations are similar to those of GPT-3.5; however, thanks to improvements in general knowledge and reasoning ability, they are less pronounced. GPT-4 still exhibits the following issues: 

  • Factual hallucinations and reasoning errors 
  • Biased outputs 
  • Very limited knowledge of events that occurred after September 2021 
  • Does not learn from experience 
  • May accept patently false statements from users 

As with all LLMs, carefully written prompts are key to maximizing usefulness and minimizing errors in model outputs. 
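To make the point about careful prompting concrete, the sketch below contrasts a vague request with a more constrained one. Both prompt strings are invented for illustration; the principle is that pinning down audience, length, structure, and how to handle uncertainty removes ways for the model to drift or hallucinate.

```python
# Illustrative: the same request written vaguely vs. carefully.
# Prompt wording is an example, not a prescribed template.

vague_prompt = "Tell me about Python."

careful_prompt = (
    "You are writing for beginner programmers. In no more than 120 words, "
    "explain what the Python language is used for, give exactly three "
    "example domains, and avoid jargon. If you are unsure of a fact, say so."
)

# Each added constraint (audience, word limit, required structure, an
# instruction to flag uncertainty) narrows the space of acceptable answers.
```

The careful version costs a few extra seconds to write but tends to yield outputs that need far less checking and editing.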

Although LLMs in general are a work in progress (especially in terms of safeguards), GPT-4 is nonetheless poised to continue and expand the conversational AI revolution and embed itself in multiple facets of modern life.  
