Choose your words, and how you say them, carefully: AI is listening
IMAGE: Peggy und Marco Lachmann-Anke — Pixabay
Presenting a company’s quarterly results is a complex exercise in which the managers responsible invariably try to project a positive image, even when the numbers offer little grounds for optimism. We have even seen companies that, after presenting results, go on to announce surprise resignations or dismissals of senior executives, or even that they are filing for bankruptcy. To be sure, when managers present results, they invariably have far more information than they actually share with investors and analysts.
That said, given the huge number of earnings presentations companies have given over the years, it is very easy to compare what was said with what happened next. In short, we have a treasure trove of data: we can feed generative algorithms the audio of these presentations, along with a conveniently tabulated summary of what happened afterwards, the goal being to create an algorithm capable of predicting, based on what a company tells analysts and how it says it, what might happen next.
In some preliminary analyses, researchers pursuing this idea have found, first, that they preferred to feed the algorithm voice recordings rather than text transcriptions, and second, that certain traits we generally associate with uncertainty, such as the abundant use of filler words (the equivalent of “well…,” “so…,” “this…,” and so on), seem to be associated with bad omens for the company.
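To make the intuition concrete, here is a minimal sketch, assuming a Python setup with scikit-learn, of how something as simple as filler-word frequency in past call transcripts could be turned into a feature and related to what happened to the company afterwards. The filler list, function names and overall setup are illustrative assumptions, not the researchers’ actual pipeline, which works on the audio itself.

```python
# Illustrative sketch only: relate filler-word frequency in past earnings-call
# transcripts to subsequent outcomes. The filler list and names are hypothetical.
import re
from sklearn.linear_model import LogisticRegression

FILLERS = {"well", "so", "um", "uh", "basically"}

def filler_rate(transcript: str) -> float:
    """Share of tokens in a call transcript that are filler words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    return sum(1 for w in words if w in FILLERS) / len(words)

def train(transcripts, outcomes):
    """Fit a simple classifier: 1 = company ran into trouble afterwards, 0 = it did not."""
    X = [[filler_rate(t)] for t in transcripts]
    model = LogisticRegression()
    model.fit(X, outcomes)
    return model
```

A real system would of course use far richer signals, particularly prosodic features extracted from the recordings themselves, but the principle is the same: past presentations plus known outcomes become training data.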
The idea of associating executives’ recorded words with their emotions or with what they actually convey — as opposed to what they intend to convey — is an interesting one, especially for institutional investors, who have a lot of money at stake. If they were able to identify, from the company’s earnings presentations, those traits that denote future problems, the consequences for the market could be very significant.
At the moment, this technology is still at the experimental stage, but it is already prompting some managers to use much more positive and enthusiastic language in an attempt to deceive the algorithms. Of course, we are talking about very approximate results: listening to a recording of someone who is naturally enthusiastic is not the same as trying to judge someone with a deeply…