Meta to Use AI Instead of Humans to Assess Risks of Its Technology and Services

Before launching its various technologies and services, Meta—the parent company of Facebook, Instagram, and WhatsApp—regularly conducts risk assessments. The company also has a dedicated team of safety experts for this purpose. Now, Meta is shifting toward using artificial intelligence (AI) instead of humans to assess risks in new technologies and services. Under this new plan, 90 percent of Meta’s “Privacy and Integrity Review” process will be handled by AI. This information was revealed in a recent NPR report based on Meta’s internal documents.

According to NPR, Meta currently relies on employees to review risks before updating its algorithms or introducing new security features. In this process, experts analyze potential social, ethical, and data-related risks associated with the technology. However, under the new plan, human involvement in these decisions is being significantly reduced.

In April, Meta’s Oversight Board expressed support for the company’s stance on allowing “controversial” speech, while also raising concerns about weaknesses in Meta’s content moderation policies and their implementation. In its statement, the board noted that since these changes are being applied globally, Meta should assess their impact on human rights. It also warned that excessive reliance on automated content detection systems could lead to uneven and unfair responses around the world.

Notably, also in April, Meta discontinued its fact-checking operations and replaced them with a community-based verification system called “Community Notes.” Confirming the shift to AI-based risk assessment, Meta stated that AI will initially be used only to evaluate low-risk technologies and features.

Source: Mashable
