Tencent Security Team: Google AI Learning System Has Major Security Holes
TensorFlow, Google's AI learning framework, has significant security flaws. Developers who use the framework to build AI applications risk exposure to malware attacks, according to Tencent's Blade Team, the company's security platform department.
Yang Yong, the head of Tencent's security platform, said TensorFlow is a free programming platform provided by Google for AI designers, which allows programmers to build AI components on top of it. When code containing security risks is incorporated into AI applications such as facial recognition or robot learning, attackers can exploit the flaws to take over system permissions, steal design models, infringe on user privacy, and even cause greater harm to users.
"Generally speaking, when programming robots, if designers accidentally use code containing such holes, malicious attackers could control the robot through the vulnerability, which is very frightening," he said. "We are just taking a small step forward in the AI security field, and we hope more technicians will work together to improve AI and make it safer."
Blade Team said TensorFlow is currently one of the most widely used machine learning frameworks, applied in many AI scenarios such as automatic speech recognition, natural language understanding, computer vision, advertising, and autonomous driving. If it were controlled by hackers, the consequences could be dreadful. Blade Team has written to Google, asking its security team to fix the affected code.
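The article does not describe the specific flaws, but a common defensive practice while a framework fix is pending is to load only model files from trusted sources and verify their integrity first. A minimal sketch of such a check, using only the Python standard library (the file paths and digest values are hypothetical, not from the original report):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model(path: str, expected_sha256: str) -> bool:
    """Return True only if the model file matches the published digest.

    Load a model file (e.g. with TensorFlow) only after this check passes;
    a mismatch may indicate tampering or corruption.
    """
    return sha256_of(path) == expected_sha256
```

A caller would compare a downloaded model against a digest published by its author before handing the file to the framework; this does not remove a flaw inside the framework itself, but it blocks one common delivery route for malicious model files.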
Some industry experts have expressed concern. Zhang Wei, deputy director of the Shanghai Information Security Trade Association, said that all enterprises currently engaged in artificial intelligence development combine algorithms with data. Since some of that data may involve the core secrets of enterprises and their users, the risk is quite high once security holes appear.
Zhang suggested that relevant enterprises check whether they have used this platform for AI programming, and that the industry strengthen communication to eliminate security risks.