Customize Consent Preferences

We use cookies to help you navigate efficiently and perform certain functions. You will find detailed information about all cookies under each consent category below.

The cookies that are categorized as "Necessary" are stored on your browser as they are essential for enabling the basic functionalities of the site. ... 

Always Active

Necessary cookies are required to enable the basic features of this site, such as providing secure log-in or adjusting your consent preferences. These cookies do not store any personally identifiable data.

No cookies to display.

Functional cookies help perform certain functionalities like sharing the content of the website on social media platforms, collecting feedback, and other third-party features.

No cookies to display.

Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics such as the number of visitors, bounce rate, traffic source, etc.

No cookies to display.

Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.

No cookies to display.

Advertisement cookies are used to provide visitors with customized advertisements based on the pages you visited previously and to analyze the effectiveness of the ad campaigns.

No cookies to display.

Connect with us

Hi, what are you looking for?

BUSINESS

OpenAI’s ChatGPT bug bounty doesn’t include jailbreaking.

OpenAI provides a bug bounty for finding and reporting vulnerabilities in its AI services, including ChatGPT. Reports can be submitted via Bugcrowd for $200 for “low-severity findings” to $20,000 for “exceptional discoveries.”

The bounty excludes jailbreaking ChatGPT or making it generate harmful code or content. “Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded,” reads OpenAI’s Bugcrowd website.

Jailbreaking ChatGPT frequently entails entering complex scenarios to evade its safety controls. For example, these may involve encouraging the chatbot to act as its “evil twin” to elicit restricted answers like hate speech or weapon-making instructions.

“Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” OpenAI argues. “Addressing these issues often involves substantial research and a broader approach,” the business says. Report such concerns on its model feedback website.

Jailbreaks show the larger weaknesses of AI systems, although they may not directly affect OpenAI as much as typical security failures. For example, last month, rez0 revealed 80 “secret plugins” for the ChatGPT API, unpublished or experimental chatbot add-ons. The vulnerability was patched a day after Rez0 tweeted about it.

One user commented on the tweet thread: “If they only had a paid #BugBounty program – I’m certain the crowd could help them catch these edge-cases in the future.”

Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Trending

The future of technological innovation is here. Be the first to discover the latest advancements, insights, and reviews. Join us in shaping the future.
SUBSCRIBE

You May Also Like