Last week, digital imaging outfit OmniVision announced the release of a new sensor chip, the first to imprint a secret watermark onto its images. The sensor, which features high resolution and wide field of view, is designed for use in security cameras. Just like watermarks on currency, a digital watermark is meant to certify the authenticity of an image or a video. However, it remains to be seen whether the new chip will be enough to reliably certify the authenticity of digital media in an era of misinformation and deepfakes.
Unlike previous digital watermarking attempts from the early 2000s, which put a visible time stamp or other metadata on a banner at the corner of the image, OmniVision’s watermark does not alter the image or video in any human-perceptible way. Instead, a secret message is embedded in randomly selected bits of the raw camera data. The bits are selected anew for each image or video frame, and the responsible circuitry is embedded on-chip. The watermark remains hidden but preserved, even after compression to JPEG, MP4, or other formats.
“Based on our implementation, even after you do all this processing, the watermarking stays with your content, but you will not be able to see it,” says Devang Patel, emerging segment manager at California-based OmniVision.
To verify the watermark, OmniVision, a camera manufacturer, or the camera’s owner would have to run a proprietary extraction algorithm. The algorithm extracts the embedded 32 bytes of secret information, which could contain the date and time, user or manufacturer information, or any other message. Ideally, this could be used to certify the authenticity of the media and even make it admissible as evidence in a court of law. We reached out to OmniVision’s largest competitors, Sony and Samsung, for comment and did not receive a reply.
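OmniVision’s scheme is proprietary, but the general idea — hiding a 32-byte payload in pseudorandomly chosen bits of the raw data, with positions re-derived for each frame from a shared secret — can be illustrated with a toy least-significant-bit sketch. All names and the keying scheme here are assumptions for illustration; unlike OmniVision’s chip, this naive version would not survive lossy compression.

```python
import random

PAYLOAD_LEN = 32  # bytes, matching the 32-byte message described above

def embed(raw: bytearray, payload: bytes, key: str, frame_no: int) -> None:
    """Hide payload bits in the least-significant bits of pseudorandomly
    chosen sample positions; positions are re-derived per frame."""
    assert len(payload) == PAYLOAD_LEN
    rng = random.Random(f"{key}:{frame_no}")  # fresh positions every frame
    positions = rng.sample(range(len(raw)), PAYLOAD_LEN * 8)
    for i, pos in enumerate(positions):
        bit = (payload[i // 8] >> (7 - i % 8)) & 1
        raw[pos] = (raw[pos] & 0xFE) | bit  # flip only the LSB: imperceptible

def extract(raw: bytes, key: str, frame_no: int) -> bytes:
    """Re-derive the same positions with the shared secret and read back."""
    rng = random.Random(f"{key}:{frame_no}")
    positions = rng.sample(range(len(raw)), PAYLOAD_LEN * 8)
    out = bytearray(PAYLOAD_LEN)
    for i, pos in enumerate(positions):
        out[i // 8] |= (raw[pos] & 1) << (7 - i % 8)
    return bytes(out)
```

Without the key and frame number, the 256 modified bits are indistinguishable from sensor noise — which is why a hidden watermark is hard to locate, and therefore hard to strip or forge.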
Keeping the watermark hidden has several advantages, says Patel. First, the entire original image is preserved, so there is no risk of important content getting blocked by the watermark. This is especially important for security cameras, where the action could happen in any corner of the screen. Second, since the watermark is difficult to detect, it is also difficult to reverse engineer and tamper with.
OmniVision’s new sensor chip embeds a digital watermark into the raw camera data that is undetectable to the human eye. An extraction algorithm run on the final image reveals a pattern of red dots, comprising the watermark. OmniVision
Difficult, but likely not impossible, says Hany Farid, a professor of electrical engineering and computer sciences at the University of California, Berkeley, and digital-image forensics expert. “We have been trying to do digital watermarking for decades,” Farid says. “But the problem is when you put something into an image or video, you can also take it out. So it's going to be vulnerable.”
Farid agrees that the correct approach to battling altered or fabricated content is for the producers—such as photographers and videographers—to certify authenticity, rather than for end users (like social-media scrollers and YouTube viewers) to attempt to detect fraud after the fact. Farid sits on the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), an Adobe-led group aimed at combating misleading information online. He’s also on the advisory board of Truepic, a photo- and video-verification company.
Earlier this year, C2PA released the first technical standard for authenticating the source of media content. Unlike watermarking, C2PA’s standard doesn’t embed the metadata in the image itself. Instead, the metadata is packaged together with the image and cryptographically signed. The digital signature is recorded on a ledger, blockchain or otherwise. Any modifications or edits to the image are documented, re-signed, and added to the ledger. That way, anybody looking to verify an image has access to its full history. The standard has yet to be adopted by large media creators and distributors.
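The sign-and-ledger flow described above can be sketched in a few lines. This is a simplified illustration, not the C2PA standard itself: the real standard uses X.509 certificates and public-key signatures, whereas this toy version stands in an HMAC over a JSON bundle, and the `ledger` list stands in for whatever ledger (blockchain or otherwise) records the signatures.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real certificate's private key
ledger = []                # stand-in for the signature ledger

def sign_asset(image_bytes: bytes, metadata: dict) -> dict:
    """Package metadata with a hash of the image, sign the bundle,
    and chain it to the previous ledger entry."""
    bundle = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
        "prev": ledger[-1]["signature"] if ledger else None,
    }
    payload = json.dumps(bundle, sort_keys=True).encode()
    bundle["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    ledger.append(bundle)
    return bundle

def verify_history(image_bytes: bytes) -> bool:
    """Walk the full edit history: every entry must verify and chain,
    and the newest entry must match the image as it exists now."""
    prev = None
    for entry in ledger:
        unsigned = {k: v for k, v in entry.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        if not hmac.compare_digest(entry["signature"], expected):
            return False
        if entry["prev"] != prev:
            return False
        prev = entry["signature"]
    return ledger[-1]["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
```

Because each entry signs the hash of the previous one, an edit can only be appended, never silently inserted or removed — which is what gives a verifier access to the image’s full history.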
Just as currency has more than one security feature, digital content can benefit from all the security it can get. “I think all of these technologies are useful,” Farid says. “I like solutions that come at it from multiple directions.”