Digital Helpers for IR Teams

Editorial work is one of the areas of IR in which artificial intelligence (AI) already facilitates routine activities. Tools such as ChatGPT from OpenAI or Claude from Anthropic are used to draft texts. Quarterly and annual reports are one relevant area: passages about business figures and the company balance sheet are usually little more than reformulations of existing text. Other types of text, such as internal and external reports, can also be automated. Beyond text, AI has also long been a help when it comes to illustrating presentations.
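
By way of illustration, a minimal sketch of what such drafting assistance can look like, assuming the OpenAI Python SDK; the model name and the figures are invented for the example and not drawn from any real report:

```python
# Minimal sketch: turn raw business figures into a draft report passage.
# Assumes the OpenAI Python SDK; model name and figures are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

figures = "Q3 revenue: EUR 412m (+8% YoY); EBIT: EUR 57m; net debt: EUR 120m"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You draft concise, neutral passages for quarterly reports."},
        {"role": "user",
         "content": f"Write a short results paragraph from these figures: {figures}"},
    ],
)
print(response.choices[0].message.content)
```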

The next step in the application of AI is the writing of press releases. An AI fed with the last 30 or 40 press releases should be able to produce at least the basic framework for the next communiqué. In practice, all that remains is to insert the specific content relating to the news in question. The time savings for IR employees are enormous. To deliver added value day after day, however, IR teams have to feed the software with new data on an ongoing basis.
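
What such a press-release workflow could look like is sketched below, again assuming the OpenAI SDK; the folder of past releases and the news item are hypothetical:

```python
# Hedged sketch: prior releases supply tone and structure, new facts the content.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical folder of past releases; the last 40 serve as style examples.
# A real pipeline would have to fit these into the model's context window.
past = sorted(Path("press_releases").glob("*.txt"))[-40:]
examples = "\n\n---\n\n".join(p.read_text() for p in past)

news = "Supervisory board appoints Jane Doe as CFO, effective 1 March."

draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": ("Draft a press release in the company's established style. "
                     "Previous releases for reference:\n\n" + examples)},
        {"role": "user", "content": "New facts to announce: " + news},
    ],
)
print(draft.choices[0].message.content)
```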

But it is also clear that for AI to be useful beyond routine work, the technology must be refined and, above all, much better protected against external manipulation. One major weakness is that texts formulated with the help of ChatGPT can rest on a strongly subjective line of argument. Depending on the user’s input, ChatGPT takes positions and sometimes invents arguments and supposed facts that sound plausible at first glance. Anyone who has a particular document drafted therefore cannot assume that only information from the source material is reproduced; it is just as possible that outside information has been added. This loophole for fake information exists because current AI replies “Unfortunately, I can’t give any information about this” only in predefined cases, rather than giving that answer whenever it does not know the individual facts.
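
One common, if imperfect, countermeasure is to constrain the model to the supplied source material and instruct it to refuse everything else. A sketch under the same SDK assumption (the file name and question are illustrative); this reduces, but does not eliminate, invented facts:

```python
# Grounding sketch: answer only from supplied material, refuse otherwise.
from openai import OpenAI

client = OpenAI()

# Hypothetical source file containing the only material the model may use.
source = open("annual_report_extract.txt", encoding="utf-8").read()

grounding = (
    "Answer strictly from the source material below. If the answer is not "
    "contained in it, reply exactly: \"Unfortunately, I can't give any "
    "information about this.\" Do not add outside information.\n\n"
    "SOURCE MATERIAL:\n" + source
)

answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": grounding},
        {"role": "user", "content": "What was last year's dividend per share?"},
    ],
)
print(answer.choices[0].message.content)
```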

The black-box character of AI makes it difficult to scrutinise in cases of suspected manipulation. This is especially dangerous if, as with a prospective IR chatbot, the output is generated by an in-house AI. Through the chat interface, malicious users can manipulate the answers, with the result that the system produces responses its operator never intended. For example, a chatbot was successfully manipulated in such a way that it tried to elicit sensitive bank data from users.
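
A deliberately simplistic sketch of one line of defence for such a chatbot: screening user input for injection patterns and screening the model’s reply before it is shown. The patterns and the generate() helper are hypothetical; real deployments need far more robust safeguards:

```python
# Defence-in-depth sketch for a public-facing IR chatbot.
import re

# Hypothetical pattern lists; far from exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"you are now",
    r"reveal .*system prompt",
]
SENSITIVE_OUTPUT = [r"\bIBAN\b", r"password", r"account number"]


def is_suspicious(text: str, patterns: list[str]) -> bool:
    """Return True if any pattern matches, case-insensitively."""
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)


def answer_user(question: str, generate) -> str:
    """Screen the question, call the model, then screen the reply."""
    if is_suspicious(question, INJECTION_PATTERNS):
        return "I can only answer questions about our published IR material."
    reply = generate(question)  # generate() stands in for the actual model call
    if is_suspicious(reply, SENSITIVE_OUTPUT):
        return "I can't help with that."
    return reply
```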

An example shows how serious manipulation by external perpetrators using AI can be. If a criminal hedge fund published a deep-fake video on social media showing a CEO announcing the takeover of a competitor or drastic job cuts, the company’s share price would react immediately. Until the company could expose the fraud, the perpetrators would be making money from the price movement.

For companies specialising in AI technology, closing such security gaps is a top priority. Given the direct impact of these risks, however, IR officers are ill advised to take a trial-and-error approach to AI applications in their day-to-day work. Instead, they should stay up to date on technical developments in AI and introduce new applications as and when security standards can be guaranteed. One thing is also clear: for the foreseeable future, people will remain indispensable in financial reporting as the final supervisory authority.