Check Point Research (CPR), the Threat Intelligence arm of Check Point® Software Technologies Ltd. (NASDAQ: CHKP) and a leading provider of cyber security solutions globally, cautions that while artificial intelligence has the potential to transform our daily lives, appropriate bans and regulations must be in place to ensure AI is developed and used ethically and responsibly.
“AI has already shown its potential and could revolutionize many areas, such as healthcare, finance and transportation. It can automate tedious tasks, increase efficiency and provide insights that were previously out of reach. AI could also help us solve complex problems, make better decisions, reduce human error and take on dangerous tasks such as defusing a bomb, flying into space or exploring the oceans. But at the same time, we see massive use of AI technologies to develop cyber threats as well,” says Ram Narayanan, Country Manager at Check Point Software Technologies, Middle East. Such misuse of AI has been widely reported in the media, including several reports of cybercriminals leveraging ChatGPT to help create malware.
Overall, the development of AI is not just another passing craze, though it remains to be seen whether its impact on society will be more positive or negative. And although AI has been around for a long time, 2023 will be remembered by the public as the “Year of AI”. However, there is still a great deal of hype around the technology, and some companies may be overstating its capabilities. We need to keep our expectations realistic and not treat AI as an automatic panacea for all the world’s problems.
We often hear concerns about whether AI will approach or even surpass human capabilities. Predicting how advanced AI will become is difficult, but researchers commonly distinguish three categories. Current AI is referred to as narrow or “weak” AI (ANI – Artificial Narrow Intelligence). General AI (AGI – Artificial General Intelligence) would function like the human brain, thinking, learning and solving tasks as a human does. The third category, Artificial Super Intelligence (ASI), describes machines that surpass human intelligence altogether.
If artificial intelligence reaches the level of AGI, there is a risk that it could act autonomously and become a threat to humanity. We therefore need to work on aligning the goals and values of AI with those of humans.
Ram Narayanan further states, “To mitigate the risks associated with advanced AI, it is important that governments, companies and regulators work together to develop robust safety mechanisms, establish ethical principles and promote transparency and accountability in AI development. Currently, there are very few rules and regulations. Proposals such as the EU’s AI Act exist, but none have been passed, so everything is essentially governed by the ethical compass of users and developers. Depending on the type of AI, companies that develop and release AI systems should guarantee at least minimum standards for privacy, fairness, explainability and accessibility.”
Unfortunately, AI can also be used by cybercriminals to refine their attacks: automatically identifying vulnerabilities, creating targeted phishing campaigns, conducting social engineering, or building advanced malware that changes its own code to better evade detection. AI can also generate convincing audio and video deepfakes for political manipulation, for use as false evidence in criminal trials, or to trick users into handing over money.
At the same time, AI is an important aid in defending against cyberattacks. For example, Check Point uses more than 70 different tools to analyze threats and protect against attacks, more than 40 of which are AI-based. These technologies support behavioral analysis and the processing of large volumes of threat data from a variety of sources, including the darknet, making it easier to detect zero-day vulnerabilities and to automate the patching of security flaws.
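To illustrate the general principle behind behavioral analysis, the simplest form of which compares a host’s current activity against its own historical baseline, here is a minimal, hypothetical sketch in Python. It is not Check Point’s implementation; the data, function name and alert threshold are invented for illustration only.

```python
from statistics import mean, stdev

# Hypothetical sketch of baseline-based behavioral analysis.
# Not Check Point's implementation; all names and values are invented.

def anomaly_score(history: list[float], current: float) -> float:
    """Z-score of the current observation against a host's own baseline."""
    if len(history) < 2:
        return 0.0  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0  # perfectly flat baseline; no meaningful deviation
    return abs(current - mu) / sigma

# Daily outbound-connection counts for one host (hypothetical data).
baseline = [110, 95, 102, 98, 105, 99, 101]
today = 640  # sudden spike, e.g. possible data exfiltration

score = anomaly_score(baseline, today)
if score > 3.0:  # a common, if simplistic, alert threshold
    print(f"ALERT: behavior deviates {score:.1f} sigma from baseline")
```

Real systems layer far richer models on top of this idea, but the core pattern is the same: learn what “normal” looks like per entity, then flag sharp deviations for investigation.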
“Various bans and restrictions on AI have also been discussed recently. In the case of ChatGPT, the concerns are mainly related to privacy: we have already seen data leaks, and there is no verification of users’ ages. However, blocking such services has only a limited effect, as any reasonably savvy user can get around a ban, for example by using a VPN, and there is also a brisk trade in stolen premium accounts. The problem is that most users do not realise that the sensitive information they enter into ChatGPT would be very valuable if leaked and could be used for targeted marketing. We are talking about potential social manipulation on a scale never seen before,” points out Ram Narayanan.
The impact of AI on our society will depend on how we choose to develop and use this technology. It will be important to weigh the potential benefits and risks whilst striving to ensure that AI is developed in a responsible, ethical and beneficial way for society.