Combating Deepfakes: AI, Cryptography, and Harnessing the Wisdom of Online Communities


As AI image generators advance at an astonishing rate, detecting deepfakes has become an increasingly difficult task. Law enforcement and global leaders have been expressing concerns about the impact of AI-generated deepfakes on social media and in conflict zones. According to Marko Jak, co-founder and CEO of Secta Labs, we are rapidly approaching a time when distinguishing a fake image at first glance will be virtually impossible.

Secta Labs, an Austin-based generative AI startup founded in 2022, specializes in high-quality AI-generated images. Users can upload photos of themselves and turn them into AI-generated headshots and avatars. The company views users as the owners of the AI models trained on their data and considers itself a custodian that helps create images from those models.

The potential misuse of advanced AI models has prompted world leaders to demand immediate action on AI regulation. Some companies, such as Meta, have opted not to release their most advanced tools to the public because of concerns about possible misuse. Moreover, the U.S. Federal Bureau of Investigation recently warned of AI deepfake extortion scams in which criminals use photos and videos taken from social media to create fake content.

Jak believes the answer to deepfakes may lie not in identifying them but in exposing them. AI can be used to recognize whether an image or video was generated by artificial intelligence, which could aid the fight against deepfakes. However, Jak also points to the AI arms race unfolding as the technology advances and bad actors build better deepfakes to evade detection tools.
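As a rough illustration of what such detection could look like in practice, the sketch below scores an image with a pretrained real-versus-generated classifier through Hugging Face's transformers pipeline. The model identifier and image path are placeholders for illustration only, not tools mentioned in the article:

```python
from transformers import pipeline  # pip install transformers pillow torch

# Hypothetical model id -- swap in any image classifier trained to
# distinguish camera photos from AI-generated images.
DETECTOR_MODEL = "example-org/ai-image-detector"

detector = pipeline("image-classification", model=DETECTOR_MODEL)

# The pipeline accepts a local path, URL, or PIL image and returns
# labels with confidence scores, e.g. [{'label': 'artificial', 'score': 0.97}, ...]
results = detector("suspect_image.png")

for result in results:
    print(f"{result['label']}: {result['score']:.2%}")
```

The catch, as Jak notes, is that any such classifier must keep pace with generators that are explicitly trained to fool it.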

While blockchain technology and cryptography are often touted as solutions to all manner of real-world problems, Jak suggests they could genuinely help with deepfakes. Cryptography could authenticate an image's origin, which may prove a practical approach because it verifies an image's source rather than analyzing its content.
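A minimal sketch of how such source verification could work, assuming a camera or publisher signs each image with a private key at capture time. The helper functions and byte strings are hypothetical; the Ed25519 primitives come from the Python cryptography package:

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_image(image_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of the image at the source (camera or publisher)."""
    digest = hashlib.sha256(image_bytes).digest()
    return private_key.sign(digest)


def verify_image(image_bytes: bytes, signature: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """Check that the image still matches the signature published by the claimed source."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        # Any alteration of the bytes, or an image that never came from
        # the claimed source, fails verification.
        return False


# The source signs at capture time; anyone with the public key can verify later.
source_key = Ed25519PrivateKey.generate()
original = b"raw image bytes"
sig = sign_image(original, source_key)

print(verify_image(original, sig, source_key.public_key()))         # True
print(verify_image(b"edited image bytes", sig, source_key.public_key()))  # False
```

Note that this tells a viewer only whether an image is unchanged since it was signed by a known source, not whether its content is truthful, which is exactly the source-versus-content distinction Jak draws.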

Jak also emphasizes the importance of leveraging the collective wisdom of online communities. He cited Twitter's Community Notes feature, which lets users add context to tweets, as a positive example, and suggests that social media companies consider ways to harness their communities to validate the authenticity of content circulating on their platforms.

In conclusion, as deepfakes become more prevalent, finding ways to combat them becomes even more crucial. AI, cryptography, and the wisdom of the crowd may all play essential roles in addressing this emerging problem. However, striking the right balance between openness and responsibility will be paramount in mitigating the potential misuse of advanced AI models.

Source: Decrypt