Vaccines are the best way to protect our bodies from unpredictable viruses and bacteria that can cause serious diseases such as measles, mumps, or rubella. But did you know that vaccines are not only needed by humans, but also by AI and other automated systems built on machine learning? Wait, why would robots need vaccination?
Industry today has become more autonomous than ever. A survey by MIT showed that almost one-third of businesses have adopted some level of machine learning automation. In 2017, 51 percent of organisations surveyed had implemented or were expanding their use of AI, up from 40 percent in 2016, according to Diego Lo Giudice, VP and principal analyst at Forrester. In other words, AI has become the new norm in the workplace.
However, while the technology holds enormous potential to positively transform our world, artificial intelligence and machine learning are vulnerable to cyber attacks. A cyber attack can trick algorithms into misinterpreting their training data, which can lead to disastrous events. Even machines as smart as artificial intelligence systems can still be confused by hackers through so-called adversarial attacks.
An adversarial attack is an input to a machine learning model that an attacker has intentionally designed to cause the model to make a mistake; it works like an optical illusion for machines. It is a technique for fooling machine learning models by feeding them malicious data that causes them to malfunction. An extreme example can be seen in self-driving cars: the AI system of a self-driving car can be tricked into reading a stop sign as a speed limit sign, endangering not only the driver but also the people around them.
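To make this concrete, here is a minimal sketch of one well-known adversarial technique, the fast gradient sign method (FGSM). The article does not name a specific attack, and `model`, `x`, `y`, and `epsilon` below are placeholder names, so treat this as an illustration rather than the researchers' method:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return a copy of x perturbed so the model is more likely to err.

    model is any differentiable classifier that outputs logits;
    x is an input batch, y the true labels, epsilon the perturbation size.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Nudge every input value in the direction that increases the loss most.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

A perturbation this small is often invisible to a human, yet it can flip the model's prediction, which is exactly the stop-sign scenario described above.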
In the workplace, an adversarial attack could cause a machine learning system to misinterpret its data. Researchers from CSIRO's Data61 have reported that classifiers can be extremely sensitive to small changes in their inputs, which means such changes are easy to make and can degrade or damage a machine learning model.
Therefore, a vaccine is needed to prevent such disasters from happening. The CSIRO researchers have developed a world-first set of techniques to effectively vaccinate algorithms against adversarial attacks. Their AI vaccine trains algorithms on weak adversaries to better prepare them for the real thing: the scientists deliberately distort the training data fed into an AI system so that the AI is not easily fooled later on.
“We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more ‘difficult’ training data set,” Richard Nock said. When an algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and immune to adversarial attacks. The researchers hope this new technique will spark a new line of machine learning research and ensure the positive use of transformative AI technologies.
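In code, this vaccination idea corresponds to what the field calls adversarial training: each batch is distorted by a weak adversary before the model learns from it. The sketch below reuses the `fgsm_attack` function shown earlier with a small `epsilon`; it follows the general recipe rather than CSIRO's exact method, and `train_loader` and the hyperparameters are assumptions:

```python
import torch
import torch.nn.functional as F

def vaccinated_training(model, train_loader, epochs=10, weak_epsilon=0.01):
    """Train on clean batches plus weakly distorted copies of them."""
    optimizer = torch.optim.Adam(model.parameters())
    for _ in range(epochs):
        for x, y in train_loader:
            # The "small dose" of the vaccine: a weak adversarial distortion.
            x_adv = fgsm_attack(model, x, y, epsilon=weak_epsilon)
            optimizer.zero_grad()  # discard gradients left over from the attack
            # Learn from both the clean and the distorted examples.
            loss = (F.cross_entropy(model(x), y)
                    + F.cross_entropy(model(x_adv), y))
            loss.backward()
            optimizer.step()
    return model
```

The idea is that the distortion stays weak enough to preserve accuracy on clean inputs while making the trained model harder to fool with stronger attacks later.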