SKIN DEEP WITH AI

 

PHOTOSHOPPING FOR THE 21ST CENTURY

 

Have you ever seen Mark Zuckerberg boast about having “complete control over billions of people’s stolen data” or witnessed Jon Snow’s heartfelt apology for the disappointing conclusion to Game of Thrones? If so, you’ve encountered a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a type of artificial intelligence known as deep learning to create images of fabricated events, hence the name “deepfake”. Want to put new words in a politician’s mouth, star in your favourite film, or dance like a professional? Then it’s time to create a deepfake.

 

WHAT ARE DEEPFAKES?

 

Deepfakes rely on neural networks that analyse large sets of data to learn how to imitate a person’s facial expressions, mannerisms, voice, and inflexions. The process involves inputting footage of two individuals into a deep-learning algorithm to train it to swap faces. In other words, deepfakes use facial mapping technology and AI to replace one person’s face in a video with that of another. Deepfakes are difficult to detect, as they use real footage, often include realistic-sounding audio, and are designed to spread rapidly on social media. As a result, many viewers assume the video they are watching is authentic.

 

INFOPOCALYPSE

 

Deepfakes primarily target social media platforms, where conspiracies, rumours, and misinformation spread easily, as users often follow the majority opinion.

At the same time, an ongoing “infopocalypse” leads people to believe they cannot trust any information unless it comes from within their social circles, such as family members or close friends, and aligns with the opinions they already hold. Many are willing to accept anything that supports their existing beliefs, even if they suspect it might be fake.

 

HOW DO DEEPFAKES WORK?

 

An autoencoder is a specialised type of neural network designed to replicate its input as its output. For instance, when given an image of a handwritten digit, an autoencoder first compresses the image into a lower-dimensional latent representation, then reconstructs that representation back into an image, learning to compress the data while minimising reconstruction error. In a deepfake pipeline, two autoencoders share a common encoder that can “interpret” either Mark Zuckerberg’s face or Mr. Data’s face. The objective is for the encoder to use the same representation for features such as head angle or eyebrow position, regardless of which face it is processing. This means that once a face has been compressed by the encoder, it can be reconstructed using either decoder: encode Zuckerberg’s face, decode it with Mr. Data’s decoder, and the face has been swapped.
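To make the shared-encoder idea concrete, here is a minimal PyTorch sketch of the training setup described above. The layer sizes, the 64×64 input resolution, and all function names are illustrative assumptions rather than details of any real face-swap tool:

```python
import torch
import torch.nn as nn

# Minimal sketch: one shared encoder, one decoder per person.
# Sizes are illustrative assumptions, not values from a real system.

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()   # learns to rebuild person A's face
decoder_b = Decoder()   # learns to rebuild person B's face
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) +
    list(decoder_a.parameters()) +
    list(decoder_b.parameters()), lr=1e-4)

def train_step(faces_a, faces_b):
    # Each decoder reconstructs its own person from the SHARED latent code,
    # so the encoder must represent pose and expression the same way for both.
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(faces_a):
    # The actual "deepfake" step: encode A's face, decode with B's decoder.
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

Because the latent code captures pose and expression rather than identity, feeding person A’s code through person B’s decoder produces B’s face performing A’s expressions.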

  

THE ARTIST AND THE ART CRITIC

 

Generative Adversarial Networks (GANs) are among the most fascinating concepts in computer science today. Two models are trained simultaneously through an adversarial process: a generator (“the artist”) learns to produce images that appear realistic, while a discriminator (“the art critic”) learns to differentiate between real images and fakes. Throughout training, the generator gradually improves at creating lifelike images, while the discriminator becomes more adept at telling real from fake. The process reaches equilibrium when the discriminator can no longer distinguish real images from those produced by the generator.
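A minimal sketch of that adversarial game, again in PyTorch and with illustrative (assumed) sizes and hyperparameters, might look like this:

```python
import torch
import torch.nn as nn

# Toy GAN training loop for 28x28 images, flattened to vectors in [-1, 1].
# All sizes and learning rates are illustrative assumptions.

latent_dim = 100

generator = nn.Sequential(          # "the artist"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
discriminator = nn.Sequential(      # "the art critic"
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),              # raw logit: real vs fake
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    # real_images: tensor of shape (batch, 28*28)
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Critic step: score real images as 1, generated images as 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Artist step: try to make the critic score fakes as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Note the `detach()` call: when training the critic, we do not want gradients flowing back into the artist, and vice versa. Equilibrium is reached when the critic’s guesses are no better than a coin flip.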

 

WHAT ARE DEEPFAKES USED FOR?

 

  • Impersonating public figures - Deepfakes have been used to create fabricated videos of politicians, celebrities, and business leaders, leading to reputational damage and misinformation campaigns. For instance, in Kendrick Lamar’s music video The Heart Part 5, the rapper’s face morphs into that of the late Kobe Bryant, creating a deepfake so convincing that Bryant, who had died two years earlier, appears to be performing. In 2018, BuzzFeedVideo produced a realistic deepfake of President Barack Obama, mimicking his voice and gestures. It quickly raised ethical concerns, even though the video was intended to highlight the potential dangers of disinformation created by deepfakes.

  • Business fraud - Fraudsters can manipulate audio and video recordings to deceive employees or customers, allowing them to gain unauthorised access to sensitive information or conduct fraudulent transactions. In one instance, AI voice cloning technology tricked a bank manager into authorising wire transfers totalling $35 million. Additionally, an AI hologram was used to impersonate the COO of one of the world’s largest cryptocurrency exchanges during a Zoom call, resulting in a business losing all of its liquid funds.

  • Identity theft - Deepfakes can be used to bypass traditional identity verification processes, enabling cybercriminals to impersonate others and gain access to restricted resources or commit financial fraud.

  • Social engineering attacks - Attackers can create convincing deepfake personas to manipulate individuals into revealing confidential information or taking actions that compromise their security. In July 2023, a deepfake video circulating on social media used the face and voice of financial services influencer Martin Lewis to encourage people to invest in an app falsely associated with Elon Musk’s ventures; Lewis had no connection to the video and warned his followers to beware of the scam. In 2021, a cybercriminal created a deepfake of Dubai Crown Prince Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum to solicit funds from unsuspecting individuals. These incidents underscore the need for advanced detection methods to combat the misuse of deepfakes for fraudulent purposes.


TIMELINE OF DEEPFAKE INFAMY

 

THE ELON MUSK DEEPFAKE SCAM

 

In August this year, all Steve Beauchamp wanted was some money for his family, and he believed Elon Musk could help. Beauchamp, an 82-year-old retiree, had seen a video late last year featuring Musk endorsing a radical investment opportunity that promised quick returns. He got in touch with the company behind the scheme and opened an account with $248. Over several weeks and multiple transactions, Beauchamp drained his retirement savings, ultimately investing more than $690,000. Then the money disappeared, lost to digital scammers operating at the cutting edge of a new wave of crime fuelled by artificial intelligence. The scammers had taken a genuine interview with Musk and manipulated it, using AI tools to replace his voice with a replica. The AI was so advanced that it could subtly adjust Musk’s mouth movements to match the new script written for the fake video. To a casual viewer, the deception would have been almost impossible to detect.

 

HOW TO SPOT A DEEPFAKE

 

As deepfake videos become harder to detect, cybersecurity providers are improving their recognition algorithms. However, you don’t need to be an expert to spot a fake video. Start by checking the source: trace where the video was posted and by whom. For example, the viral Tom Cruise deepfake came from the account @deeptomcruise, not the actor himself. Assess whether the source is legitimate: is it a reputable news outlet or just a fan account?

Use search engines to verify images. Take a screenshot of the video and upload it to Google Images or Bing for a reverse image search. Check the image's history and see if it has been used elsewhere or manipulated. Fact-check by comparing the video with reliable sources. Trust your instincts but also rely on credible information. Don’t share unless you’re sure the video is real.
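If you want a clean still for that reverse image search, a few lines of Python with OpenCV will extract one; the file names below are placeholders:

```python
import cv2

# Grab a single frame from a suspect video so it can be uploaded to a
# reverse image search (Google Images, Bing, TinEye, etc.).

def extract_frame(video_path, frame_number=0, out_path="frame.png"):
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number)  # jump to the wanted frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read frame {frame_number} from {video_path}")
    cv2.imwrite(out_path, frame)
    return out_path

# Usage: extract_frame("suspect_clip.mp4", frame_number=120)
```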

 

Here are some signs of a deepfake:

 

  • Unnatural body movements - Look for awkward or uncoordinated gestures, or if the head and body don’t align.

  • Odd colouration - Watch for inconsistent lighting or strange skin tones.

  • Strange eye movements - Check if the eyes blink unnaturally or not at all (a simple blink-rate check is sketched after this list).

  • Awkward facial expressions - See if the face and emotions match the conversation.

  • Unnatural teeth/hair - AI struggles with the fine detail of teeth and often renders implausibly perfect hair with no flyaways.

  • Inconsistent audio - Look for mismatched mouth movements or odd background noises.

  • Blurry alignment - Watch for edges that are out of focus or frames that seem misaligned.
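As a concrete example of the eye-movement check above, here is a rough Python sketch of the classic eye-aspect-ratio (EAR) blink counter using dlib’s 68-point face landmarks. The threshold is an assumption, and the landmark model file is a standard dlib asset that must be downloaded separately; treat this as a heuristic, not a reliable detector:

```python
import cv2
import dlib
from scipy.spatial import distance

# Early deepfakes often blinked far too rarely. The eye aspect ratio (EAR)
# drops towards zero when the eye closes, so counting EAR dips counts blinks.

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

RIGHT_EYE = range(36, 42)  # landmark indices in the 68-point dlib scheme
LEFT_EYE = range(42, 48)

def eye_aspect_ratio(p):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    a = distance.euclidean(p[1], p[5])
    b = distance.euclidean(p[2], p[4])
    c = distance.euclidean(p[0], p[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path, ear_threshold=0.21):
    cap = cv2.VideoCapture(video_path)
    blinks, closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            def pts(idx):
                return [(shape.part(i).x, shape.part(i).y) for i in idx]
            ear = (eye_aspect_ratio(pts(LEFT_EYE)) +
                   eye_aspect_ratio(pts(RIGHT_EYE))) / 2.0
            if ear < ear_threshold and not closed:
                blinks, closed = blinks + 1, True
            elif ear >= ear_threshold:
                closed = False
    cap.release()
    return blinks  # an unusually low count over a long clip is a red flag
```

A healthy adult blinks roughly every two to ten seconds, so a multi-minute clip with almost no blinks deserves scrutiny. Modern deepfakes have largely fixed this tell, which is why no single check should be trusted on its own.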


ARE ALL DEEPFAKES BAD?

 

Deepfake technology also has positive applications across various industries, including film, education, digital communications, gaming, social media, healthcare, material science, and business sectors like fashion and e-commerce. In the film industry, for example, deepfakes can be used to recreate digital voices for actors who have lost theirs due to illness, or to update footage without the need for reshoots. More controversially, AI can bring deceased actors back to the screen: the late James Dean was slated to star in Finding Jack, a Vietnam War film.

 

DEEPFAKES AND CYBERSECURITY

 

In the field of cybersecurity, deepfakes have been utilised in various deceptive practices, particularly as a means to advance social engineering attacks like spear-phishing. A common tactic involves creating video or audio clips that mimic corporate executives or public figures. Another approach sees voice deepfakes used to authenticate fraudulent requests over the phone, deceiving employees into divulging sensitive information or making unauthorised transfers. Instances of fraud involving deepfakes have surged by 3,000% over the past year, especially in the financial sector. In one recent case, deepfakes were employed to simultaneously impersonate a CFO and other executives from a multinational company during a conference call, tricking a finance employee into transferring over $25.6 million.

 

STRATEGIES FOR DETECTING AND MITIGATING DEEPFAKE RISKS

 

Here’s how various practical technologies and methodologies can be employed to detect and mitigate the risks associated with deepfakes:


  • AI-Driven Anomaly Detection Systems: AI-powered anomaly detection systems utilise machine learning, including deep neural networks, to identify subtle manipulations in digital media. Trained on extensive datasets, these systems detect pixel-level anomalies and can be integrated into cybersecurity frameworks to flag potential deepfakes automatically (a minimal classifier sketch follows this list).

  • Preparation and Awareness Training: Invest in training for employees, especially those in roles vulnerable to targeted attacks (e.g., financial officers, HR). Regular training sessions can raise staff awareness of the nuances of deepfakes, improving their ability to recognise generative AI-based phishing attempts and respond effectively to potential threats.

  • Incident Response Plans: Develop a comprehensive deepfake incident response plan, including protocols for the swift identification, containment, and analysis of suspected deepfake content. This plan should seamlessly integrate into the broader cybersecurity incident response framework.

  • Operationalise Zero Trust: Incorporating Zero Trust can significantly strengthen defences against deepfakes by enforcing stringent access controls and robust data protection measures such as segmentation, Multi-Factor Authentication (MFA), and Least-Privilege Access Control.

  • Collaboration and Sharing of Intelligence: Strengthen defence capabilities by engaging in partnerships and participating in industry-wide initiatives to share intelligence on emerging deepfake techniques and threats.

  • Legal and Regulatory Compliance: Ensure adherence to laws and regulations relating to digital content verification and privacy. Collaborating with legal teams can aid in understanding liabilities and responsibilities in managing deepfake incidents.
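To illustrate what sits behind the anomaly-detection bullet above, here is a deliberately simplified PyTorch sketch of a frame-level real-vs-fake classifier. The architecture, names, and training details are assumptions for illustration; production detectors are far larger and are trained on dedicated corpora such as FaceForensics++:

```python
import torch
import torch.nn as nn

# Toy frame-level deepfake classifier. Assumes a labelled dataset of face
# crops (1.0 = fake, 0.0 = real); not a production detector.

class FakeFrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # works for any input resolution
        )
        self.head = nn.Linear(64, 1)        # logit: > 0 suggests "fake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FakeFrameClassifier()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    # frames: (batch, 3, H, W) face crops; labels: (batch,) floats
    logits = model(frames)
    loss = loss_fn(logits, labels.unsqueeze(1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

In practice such a classifier scores individual frames, and a video-level verdict aggregates those scores; flagged clips are then routed to human analysts rather than blocked automatically.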

 

By adopting these advanced detection methods and incorporating them into comprehensive cybersecurity strategies, organisations can significantly enhance their resilience against the sophisticated threat posed by deepfakes. This proactive approach is crucial in safeguarding the integrity of information in an era where seeing and hearing can no longer be trusted blindly.

 

CYBER LONDON AND DEEPFAKES

 

Here at Cyber London, we are committed to keeping everybody safe and secure online and aim to curtail online harm as much as possible. The threat posed by deepfakes is evolving, necessitating both awareness and action from all sectors involved in cybersecurity. Organisations must remain proactive, not only by implementing advanced technological defences but also by cultivating a culture of continuous education and regulatory awareness. Seeing and hearing no longer equate to believing: we must question, verify, and defend against the manipulative potential of deepfake technologies. Your vigilance and readiness to adapt are your strongest defences against this emerging form of digital deception. Join us as we fight the good fight against malicious deepfakes.
