OpenAI Releases GPT-4 Model that Accepts Image Input

OpenAI released the GPT-4 artificial intelligence model in the early hours of today (Beijing time). GPT-4 is a large multimodal model that accepts both image and text input but responds only in text. OpenAI claims the model is “more creative and collaborative than ever before” and “can solve difficult problems more accurately.”

OpenAI states that GPT-4 has already been integrated into the products of several partners, including Duolingo, Stripe, and Khan Academy.

OpenAI’s tests show that GPT-4 performed well on several standardized exams, including the Uniform Bar Exam, the LSAT, SAT Math, and SAT Evidence-Based Reading & Writing.

GPT-4 will be available to subscribers of the paid ChatGPT Plus tier ($20 per month) and will also power Microsoft’s Bing chatbot. It is additionally offered as an API for developers to build on; OpenAI began accepting waitlist applications today.
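For developers curious what a multimodal request might look like, here is a minimal sketch of assembling a chat-style request body that pairs a text question with an image. The field names (`model`, `messages`, `content`, `image_url`) follow the common chat-completions convention and are an assumption for illustration, not a format quoted from this article; consult the official API documentation for the actual schema.

```python
# Hypothetical sketch: building a request payload for a text + image
# prompt to a GPT-4-style multimodal endpoint. Field names are assumed,
# not taken from the article.

def build_gpt4_request(prompt: str, image_url: str) -> dict:
    """Assemble a chat-style request body pairing text with an image."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_gpt4_request(
    "What is unusual about this picture?",
    "https://example.com/photo.jpg",
)
print(payload["model"])  # gpt-4
```

The payload would then be sent to the provider's chat endpoint with an API key; only approved waitlist accounts could do so at launch.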
