Recent developments surrounding the propagation of fake news and deepfake videos to influence political agendas, along with mixed reactions to the fake news bill passed in Singapore, have put artificially generated media in the spotlight. Against this backdrop, experts at Malwarebytes investigated how the use of artificial intelligence (AI) and machine learning to produce realistic-looking videos could not only jeopardise reputations, but also become a serious cyber threat if cybercriminals misuse AI for malicious activities.
As a society, we are already using artificial intelligence (AI) across a host of industries. Speech recognition. Autofill. Biometrics. Machine learning platforms. This technology train has left the station, and we will soon bear witness to its widespread adoption.
So what if someone figures out how to abuse AI applications? In recent years, we’ve watched the market for smart home assistants and other Internet of Things (IoT) devices explode, attracting the attention of cybercriminals, who quickly realized that, with a little tinkering, they could penetrate defenses with minimal effort, as most of these devices shipped without privacy or security built into the design. Rewind to 2005 and ask yourself: “Could you imagine your baby monitor being used in a botnet?” It’s not a stretch to imagine AI being tampered with as well.
Will AI be a disruptive technology, then, in both the good and the bad sense? The answer: a definitive yes. AI has already transformed from “new kid on the block” to a widely applied science, although in some respects it is still used as a buzzword to sell technologies without a true understanding of how it’s being incorporated into platforms.
The report, AI Goes Awry, covers these points:
- What AI and machine learning are, and how they are used in today’s technology;
- Advantages and concerns of using AI and ML;
- Possible problems if and when AI is implemented in malware;
- Preventative measures for organizations and consumers.
Download the report here.