According to IT House on October 12, computer science researchers at Brown University have discovered a new vulnerability in OpenAI's GPT-4 safety settings: prompts written in less common languages such as Zulu and Gaelic can bypass many of GPT-4's restrictions. When the researchers translated commonly restricted prompts into these languages, the attacks succeeded about 79 percent of the time, compared with a success rate of less than 1 percent for the same prompts in English. The researchers acknowledge that publishing this work could be harmful by giving cybercriminals ideas. Notably, the team shared its findings with OpenAI to mitigate these risks before releasing them to the public.
