March 15, 2023 Issue 9
Dear Bodhisattvas, greetings!
I woke up to the news that OpenAI has released the GPT-4 model, a few days earlier than expected. Here's a summary of what I know so far.
The following is the main content of this issue, with an estimated reading time of about 3 minutes.
1. OpenAI Releases GPT-4
As soon as I woke up, I saw the email OpenAI sent to ChatGPT Plus users this morning. Starting today, Plus subscribers can use the GPT-4 model in ChatGPT, and access to the GPT-4 API can be requested through the API waitlist.
According to Orange's summary, GPT-4's academic ability has improved markedly, scoring better than roughly 90% of human test-takers on several exams. It is a true multimodal model: it can write working web code directly from a hand-drawn prototype and understand the figures in research papers. Its accuracy on English benchmarks has risen from about 70% to 85.5%, and its accuracy in Chinese now reaches the level GPT-3.5 achieves in English. Factual accuracy has also improved significantly. The training data still cuts off at September 2021.
In terms of pricing, the API model is gpt-4-0314, and for now only text requests are supported (image input is still in alpha). Pricing is much higher than the gpt-3.5 API, with input and output billed separately: $0.03 per 1K tokens for input and $0.06 per 1K tokens for output.
Compared with the gpt-3.5 models, gpt-4-0314 accepts contexts of up to 8,192 tokens, and a 32,768-token variant is also offered.
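For readers who already have API access, here is a minimal sketch of what a gpt-4-0314 request might look like. It assumes the openai Python package (the pre-1.0 ChatCompletion interface) and an OPENAI_API_KEY environment variable; the cost arithmetic simply applies the per-token prices quoted above and is only an illustration.

```python
# Minimal sketch: calling gpt-4-0314 through the chat completions API.
# Assumes the openai Python package (pre-1.0 interface) and that the
# OPENAI_API_KEY environment variable is set on an account with GPT-4 access.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4-0314",  # text-only for now; image input is still in alpha
    messages=[
        {"role": "user", "content": "Summarize the GPT-4 release in one sentence."},
    ],
)
print(response["choices"][0]["message"]["content"])

# Rough cost estimate from the prices above:
# $0.03 per 1K input tokens, $0.06 per 1K output tokens.
usage = response["usage"]
cost = (usage["prompt_tokens"] * 0.03 + usage["completion_tokens"] * 0.06) / 1000
print(f"Approximate cost of this call: ${cost:.4f}")
```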
2. Performance of GPT-4
GPT-4 is a large multimodal model that supports both image and text inputs and generates text outputs.
Firstly, GPT-4's performance across languages has improved significantly: its accuracy in Chinese is around 80%, exceeding GPT-3.5's accuracy in English.
Secondly, GPT-4 now outperforms most existing large language models, achieving state-of-the-art (SOTA) results on many benchmarks.
Thirdly, GPT-4 achieves near-perfect scores on several exams, such as the USABO Semifinal 2020 (USA Biology Olympiad) and GRE Verbal. Most of the official results show it outperforming GPT-3.5.
However, despite these capabilities, GPT-4 retains limitations familiar from earlier GPT models. For example, its output is not always factual: it can "imagine" facts and make reasoning errors.
3. Official Examples
The OpenAI website's introduction provides some examples of GPT-4's capabilities.
Firstly, GPT-4 can understand the content of images.
Secondly, GPT-4 can summarize research papers directly from images.
Thirdly, GPT-4 can answer questions based on the content of images.
Fourthly, GPT-4's behavior can be steered through the system message. In the official example, the model is instructed to answer in the style of Socrates: instead of giving answers directly, it asks questions that help the student think for themselves. A rough sketch of this follows below.
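To make the idea concrete, here is a rough sketch of a Socratic-tutor system message. The prompt text paraphrases the spirit of OpenAI's example rather than quoting it, and the same pre-1.0 openai interface as in the earlier sketch is assumed.

```python
# Rough sketch: steering GPT-4's behavior with a system message.
# The tutor prompt paraphrases the idea of OpenAI's Socratic example,
# not its exact wording.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4-0314",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a tutor who always responds in the Socratic style. "
                "Never give the student the answer directly; ask guiding "
                "questions that help them work out the solution on their own."
            ),
        },
        {
            "role": "user",
            "content": "How do I solve the system of equations x + y = 3 and 2x - y = 0?",
        },
    ],
)
print(response["choices"][0]["message"]["content"])
```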
4. Ways to Experience GPT-4
There are currently two ways to try GPT-4:
- Subscribe to ChatGPT Plus
- Use Poe, which has already integrated GPT-4 (along with the new Claude+ model)
5. Updates on Paid Columns
6. Blog Updates
7. Treasury of the True Dharma Eye
"To know all things as self-mind, without attachment. To know all things as self-mind is to realize the nature of mind and accomplish the body of wisdom, without relying on others for enlightenment. To know that the three realms are only mind, and the three times are only mind, is to truly understand that the mind is boundless and limitless. To know that the mind and Buddha are the same, and that Buddha and sentient beings are the same, is to realize that both Buddha and mind are endless."
Excerpt from the "Avatamsaka Sutra"
If you enjoy the content of this issue, please help share it with your friends to support me in continuing to write.
Feel free to reply to this email or send a message to [email protected] to communicate with me.
With best wishes,
For more reading, please visit my website: Shu Cheng Leslie
© 2023 Shu Cheng Leslie