Eye movement, hand trembling, body posture, contradictions in a narrative and much more are elements of live communication that can reveal (or conceal) sincerity. The codification of these elements, first as subject matter of the social and behavioral sciences and then as data fed to machines that draw charts, has provided our wondrous civilization with, among other things, the machines of truth, or lie detectors.
VeriPol is a lie detector unlike the rest. In the classic setup, cables and sensors monitor bodily variations (heart rate, blood pressure, respiratory rate, electrodermal activity) in the person under investigation. Abrupt changes in any of these are taken to indicate to the interrogator that what he or she is hearing at that moment is at least inaccurate. With VeriPol, lie detection is based instead on what the questioned person says, via linguistic analysis.
This software uses natural language processing algorithms to determine whether a statement is true or not. In essence, it identifies words or phrases that, according to linguistic studies, indicate the possibility of a false statement. For example, using too many adjectives, or avoiding any description of the scene, is considered a sign of lying. These and many other such markers make up the field of machine semantic analysis of speech, which is also making its way forward in the fourth industrial revolution (more in the next issue of cyborg magazine).
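The marker-based approach described above can be sketched as a toy scoring function. Everything here is an illustrative assumption: the adjective suffixes, the scene-word list, and the weights are invented for demonstration and have nothing to do with VeriPol's actual (unpublished) feature set or model.

```python
import re

# Toy stand-ins for the linguistic cues mentioned in the text
# (invented for illustration, not VeriPol's real features).
ADJECTIVE_SUFFIXES = ("ous", "ful", "ive", "able", "ible", "less")
SCENE_WORDS = {"saw", "street", "corner", "night", "behind", "face"}

def deception_score(statement: str) -> float:
    """Return a rough 0..1 score; higher means more deception markers."""
    words = re.findall(r"[a-z']+", statement.lower())
    if not words:
        return 0.0
    # Marker 1: an unusually high share of adjective-like words.
    adjectives = sum(1 for w in words if w.endswith(ADJECTIVE_SUFFIXES))
    adjective_ratio = adjectives / len(words)
    # Marker 2: no concrete description of the scene at all.
    scene_hits = sum(1 for w in words if w in SCENE_WORDS)
    scene_penalty = 0.5 if scene_hits == 0 else 0.0
    # Illustrative linear combination of the two markers, capped at 1.
    return min(1.0, 2.0 * adjective_ratio + scene_penalty)

print(deception_score("Two masked men grabbed my bag on the street corner at night."))  # → 0.0
print(deception_score("It was a terrible, awful, horrible, unbelievable thing."))  # → 1.0
```

A real system of this kind would replace the hand-picked lexicons with features learned from labeled reports, but the principle, turning word-level cues into a numeric verdict, is the same.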
The Spanish police began using it in 2018, but in practice it did not seem to deliver the best results. Apart from the lack of trained staff, those who used it stated that it is not that accurate. It was used to detect false accusations of theft, which is a criminal offence in Spain; the final decision, they say, was made only after the admission of the person who filed the complaint. This year, however, there is a plan to expand its use and introduce it to the Spanish gendarmerie (Guardia Civil)[mnf]https://algorithmwatch.org/en/story/spain-police-veripol/[/mnf]. It seems that the general acceleration towards the new digital paradigm favors such applications.
Certainly, the more data the machine has for training, the better and more accurate it can become; there is no doubt about that. But the question remains: how far are we from a machine judging our honesty? Or is it, in a way, already happening?