YOUNGSTOWN, Ohio (WKBN) – You may have seen the videos on TikTok of a convincing-looking Tom Cruise performing magic tricks and playing guitar. It’s not him, but the deepfake videos — and others like them on the internet — show how someone may be fooled by computer-generated images and video.

Now, those in legal professions are keeping a watchful eye on the technology and how it may affect court cases in the future.

Daniel Shin is a cybersecurity researcher at the Center for Legal & Court Technology in Williamsburg, Va. He studies developments in technology and their potential legal implications. For instance, the center recently began testing a hologram in the William and Mary Law School’s McGlothlin Courtroom for potential use in virtual testimony.

Shin said that around early 2018, someone on Reddit released the first app for creating deepfakes, and the technology has been spreading since then. More recently, an AI-generated image of Pope Francis wearing a white puffy coat was cited in news reports earlier this year as an example of deepfake technology.

Shin said deepfakes have gone beyond just images, however, and over the last two years, people have been experimenting with openly available machine-learning algorithms.

“It wasn’t just about face swaps. As people started to experiment with other types of machine-learning algorithms — originally developed by researchers, mind you, for non-nefarious purposes — they began to tinker with voice synthesis, voice mimicry and lip synchronization,” he said.

It’s not widespread — yet. Creating a deepfake video still requires some technical knowledge as well as access to powerful computing resources. But the danger to the courts is that the technology will likely be available to more people in the future.

“The danger is this: a lot of these deepfake models are able to synthesize very realistic but fake media. So at the outset, a lot of digital evidence that courts may rely on — whether video, audio or images — we’re at a point where somebody with some technical skills, but not necessarily an expert, can synthesize very realistic media that is indistinguishable, that cannot be determined to be fake or not,” Shin said.

As such, Shin and his colleague Frederick Lederer are cautioning that the technology is worth monitoring. Lederer, director of the Center for Legal & Court Technology, has been speaking with lawyers and judges about deepfakes and was a featured speaker on the topic, among others, at the Ohio Judicial Conference’s Court Technology Conference in April.

Both Shin and Lederer said deepfakes haven’t yet become an issue in the courts, but the technology may prompt those in legal professions to take a new look at chain-of-custody issues.

“We are concerned enough to want our courts and our judges to be familiar with it but not concerned enough at the moment to think we have an issue of significance. That may be for the future,” Lederer stressed.

Lederer explained how evidence is currently handled.

“The law of evidence right now requires that we… authenticate a piece of evidence. The classic and simplest way to do this is what is called a witness with knowledge. So suppose now that you walked into a trial, and you were a witness, and you said, ‘Well, I saw the whole thing, and I used my smartphone, and here’s the audio-video, and that’s exactly accurate.’ That would authenticate it,” he said. “Now of course, if you were a part of this plan, you would know that it’s totally fraudulent. But under the current evidence system, unless someone was in a position to technically refute that, the audio-video would be admissible in court.”

Shin said the future may lie in finding a way to verify that a photo or video is real and not AI-generated, for example by recording when an image was taken and on which device, and confirming that it hasn’t been tampered with since.
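What Shin describes resembles cryptographic content provenance, where a capture device signs each file at the moment it is recorded so that any later edit can be detected. The sketch below is a simplified illustration of that general idea, not the center’s or any vendor’s actual system; the device key pair, the helper functions and the use of Python’s third-party “cryptography” package are all assumptions made for the example.

```python
# Minimal sketch of device-level media provenance (illustrative only).
# Requires the third-party 'cryptography' package: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical device key pair. A real camera would keep the private key
# in tamper-resistant hardware and publish the public key via a certificate.
device_key = Ed25519PrivateKey.generate()
device_public = device_key.public_key()

def sign_capture(media: bytes) -> bytes:
    """Sign the SHA-256 digest of the media at the moment of capture."""
    return device_key.sign(hashlib.sha256(media).digest())

def verify_capture(media: bytes, signature: bytes) -> bool:
    """Return True only if the media is byte-for-byte what the device signed."""
    try:
        device_public.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

video = b"...raw video bytes..."        # stand-in for a recorded file
sig = sign_capture(video)

print(verify_capture(video, sig))         # True: file is unaltered
print(verify_capture(video + b"x", sig))  # False: any edit breaks the signature
```

In a working system, the public key would be tied to a specific device through a certificate, so a court could check both that the file is unmodified and which camera produced it.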

Although the technology is new, the issue is not.

“It’s probably worth noting that the science of questioned documents is fairly recent in world history — I think, roughly late 1880s or so. So for centuries, there was always the possibility that a clever forger could forge a document, and the system survived, notwithstanding that,” Lederer said. “We may be right back to there. We won’t be able to trust whether they’re digital documents or digital audio and video.”

Mahoning County Prosecutor Gina DeGenova said her team hasn’t come across deepfakes in the courtroom yet, though she has heard of scammers using voice-replicating software to trick people into giving them money over the phone.

She said their video evidence usually comes from police body cameras or other sources where the chain of custody can be easily verified. In addition, she said prosecutors and defense attorneys have to act in good faith and can’t submit evidence that they think was tampered with. If a video was shot by a witness, the witness would have to come in and testify that the video is credible.

DeGenova said she can see changes in the future in which digital forensics experts may be called in to testify to the accuracy of the evidence.

Lederer and Shin added that governments are aware of the issue with deepfakes and are providing guidance, while the industry has recognized it and is trying to come up with solutions.

“Yes, the tools are out, which means the Pandora’s Box is already open, but should we focus on the worst possible case scenario? No. People should cool their heads, study how the AI community is kind of going along, and I think it will require the collaboration of government, industry and academia from all different spectrums, and kind of monitor and perhaps create the next few steps on how you manage this new reality going forward,” Shin said.