Deepfakes: An Emerging Risk

Technological advancement is a double-edged sword: it brings not only the development of human civilization but also new risks for society. On one hand, the rise of information technology has solved many problems and made life much easier; on the other, it has created new threats. Cybercrime is one such example, and its sophistication has grown in step with the growth of information technology. Deepfakes are one grave concern that is set to haunt corporates in the coming years.

Deepfakes first made their presence felt and became a subject of discussion back in 2017. A deepfake (deep learning + fake) is, in simple terms, a realistic fake video: an image or video digitally manipulated through the application of deep learning. In other words, deepfakes use facial-mapping technology and AI to swap the face of a person in a video with the face of another person (Westerlund, M. 2019. Technology Innovation Management Review). The potential harm this technology can do is immense. It has the potential to become a massive threat to corporates and a complicated problem for underwriters.

A rising risk for corporates and the public in general:

Deepfakes can be extremely damaging to a firm's reputation if used against it. Negative news travels much faster than any other news, and a single fabricated statement or video clip can harm a firm's reputation in exactly the way cybercriminals intend. The resulting negative sentiment can wreak havoc on the stock price. Disinformation about the company spread through fake videos of C-suite executives can distort its image and cause lasting reputational damage. Deepfakes and audio cloning may also convince an employee to give another party access to a confidential project or confidential data. Financial fraud using deepfakes is another area corporates have to deal with: for example, a cloned "CEO" asking for a large sum of money to be transferred for an urgent deal. With this new weapon, cybercriminals can create images and videos of key executives that cause catastrophic damage to the firm's image and reputation before they are proven to be fake.

Protect your business from deepfakes:

Deepfakes are more than a phishing or malware attack; they are a more sophisticated weapon in the cybercriminal's arsenal. While the technology for tackling deepfake risk is still emerging, firms can take some basic steps now to safeguard themselves.

Educating employees: Most firms are not yet fully aware of the danger of deepfakes, so education on the topic is missing. As we have seen in the past, extensive training on phishing and malware attacks did help counter those risks to some extent. Educating employees on deepfakes will help them understand that not every video or audio clip may be genuine.

Search tools to identify fake videos / images: Firms can search for fake videos circulating about them and have them labelled as such. Investing in, or partnering with providers of, detection technology that can identify fakes can be helpful until more robust technology emerges; a minimal sketch of such a screening pipeline is shown below.
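
For illustration only, the sketch below shows what an automated screening step might look like: sample frames from a video, detect faces, and score each face with a pretrained deepfake classifier. It assumes OpenCV and PyTorch are available; the model file name (deepfake_detector.pt), the sampling rate, and the review threshold are hypothetical assumptions, not a recommendation of any specific product.

```python
# A minimal deepfake-screening sketch, for illustration only.
# Assumes a pretrained binary classifier saved as "deepfake_detector.pt"
# (hypothetical file); real deployments would use a vetted detector.
import cv2
import torch

def screen_video(path, model, sample_rate=30):
    """Sample frames from a video and score each detected face."""
    cap = cv2.VideoCapture(path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_rate == 0:  # score roughly one frame per second
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
                tensor = torch.from_numpy(face).permute(2, 0, 1).float() / 255
                with torch.no_grad():
                    prob_fake = torch.sigmoid(model(tensor.unsqueeze(0))).item()
                scores.append(prob_fake)
        idx += 1
    cap.release()
    return max(scores) if scores else None  # flag video on its worst frame

model = torch.jit.load("deepfake_detector.pt").eval()  # hypothetical detector
if (score := screen_video("statement.mp4", model)) and score > 0.8:
    print(f"Flag for manual review (max fake probability {score:.2f})")
```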

Advanced security measures: Advanced or biometric authentication can be implemented to identify authentic users. High-security challenge questions, or one-time codes verified out of band, can help fight fake audio, where a cloned voice is used to extract confidential information; a sketch of such a check follows below.
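
As one hedged example of an out-of-band check, the sketch below uses the pyotp library to verify a time-based one-time code that the caller reads out from a separate authenticator device. The requester ID, secret storage, and workflow messages are illustrative assumptions, not a complete authentication design.

```python
# A minimal sketch of out-of-band verification for voice requests,
# assuming the pyotp library and a shared secret provisioned per
# executive in advance. Illustrative only.
import pyotp

# One shared secret per authorized requester (hypothetical storage;
# in practice, keep secrets in a vault or HSM, never in source code).
SECRETS = {"ceo@example.com": pyotp.random_base32()}

def verify_caller(requester_id: str, spoken_code: str) -> bool:
    """Check a one-time code read back by the caller against the
    requester's authenticator app. A cloned voice cannot produce the
    code, because it comes from a separate device, not the phone call."""
    secret = SECRETS.get(requester_id)
    if secret is None:
        return False
    # valid_window=1 tolerates one 30-second step of clock drift
    return pyotp.TOTP(secret).verify(spoken_code, valid_window=1)

# Usage: before wiring funds on a voice instruction, ask the caller to
# read the current code from their authenticator app.
if verify_caller("ceo@example.com", input("Code from caller: ").strip()):
    print("Caller verified; proceed with standard approval workflow.")
else:
    print("Verification failed; treat the request as suspicious.")
```

The design choice here is that the proof of identity travels over a channel the attacker's cloned voice cannot reach: even a perfect audio imitation fails the check without the physical authenticator device.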

Be more transparent with employees: Being transparent, to the extent that confidentiality is not compromised, helps a firm's employees brush aside any fake negative news they encounter. A very low level of transparency might instead lead employees and other stakeholders to believe the misinformation and propaganda.

Need for deepfake / AI legislation:

Currently there is a lack of stringent regulation on deepfakes. China, getting ahead of the curve, has issued its first regulations on deepfakes. The key requirements are: consent of the person whose image is used; clear labelling of altered videos so that viewers know they are not real; and a prohibition on fake news or any content that violates the law.

In India, some sections of existing regulation deal with issues raised by deepfakes, but the coverage is not comprehensive. For example, Section 500 of the Indian Penal Code provides punishment for a person responsible for defaming another. Sections 67 and 67A of the IT Act contain provisions against sexually explicit material. India also has regulations against spreading false information about individuals during election periods to mislead voters. The US has somewhat similar rules covering consent of the person whose image is used and damage to candidates' reputations to influence elections. Two bills have been proposed on the subject: the "Malicious Deep Fake Prohibition Act of 2018", to prohibit certain fraudulent audio-visual records, and for other purposes; and the "DEEP FAKES Accountability Act", to combat the spread of disinformation through restrictions on deepfake video alteration technology. However, these regulations are not sufficient to act as a deterrent to organized players. Given the scope of the deepfake problem, we can expect more regulation to emerge in the coming years.

Deepfake videos and voice cloning are set to rise, and they are an emerging threat not only to corporates but also to nations. Mass awareness of deepfakes as a first layer of protection, more robust regulation of AI, and technology to counter deepfakes are the need of the hour.

About the Author

Sachin Mohan, CMA, ARM, PMP

Sachin is an experienced professional in risk and strategy management. He has served in advisory roles for multinational corporations as well as SMEs in the areas of risk, finance, and strategy. Sachin is also a mentor and coach for startups. He is associated with the global professional accounting body IMA, the project management professional body PMI, and the leading business school Ramaiah Institute of Management (Entrepreneurship Division).
