
This is supported by the well-known company Kaspersky Lab (specialized in searching for malicious interventions in electronic networks). They named “injections” various actions intended to feed the databases behind the complex algorithms of large language models in such a way (with such data) that when they “interact” with others, they promote their own purposes. For example, to advertise a product or service.
So far (the company says) the “injection” cases it identified were selfish but not malicious. However, since neural networks are “fed” by their users’ data, meaning they have an “open data input,” more organized hack-injections cannot be ruled out in the future. Cyber attackers show active interest in neural networks, stated Vladislav Tushkanov, head of the machine learning technology department at Kaspersky Lab.
And why shouldn’t they show active interest in this “new game in town”? There’s always some enemy lurking around to challenge the mechanical wisdom of every openAI and every google…