Artificial intelligence in companies

Artificial Intelligence (AI) can make predictions and perform large-scale tasks quickly. It also supports customer service through chatbots. Recently, it has been the subject of intense debate as well. Do you know what responsibility is required to use AI in your company?

The central issue is that we cannot let decisions be made exclusively by algorithms; human intervention must carry decisive weight in how autonomous tools operate. The way forward, then, is to invest in processes such as auditing. Explainability is another means of avoiding problems in the use of artificial intelligence.

What matters is increasing confidence in AI adoption so that bots can act safely, including in management activities such as screening candidates and evaluating performance. Here are the practices your company should adopt to deal with Artificial Intelligence responsibly.

Robots: audited and allies

Processes run by bots require auditing to gain reliability. At the same time, it is worth noting that audit work itself also benefits from intelligent technology: data processing tools make it possible to assess risks and the methods that control them, and big data resources are already incorporated into this activity.

As in other areas, automation makes it possible to broaden the scope of evaluation. In other words, both the coverage of the audit and its frequency can increase significantly. This is how Artificial Intelligence has been changing the way audits are approached. But how does this relate to the responsibility required to use AI in your company?

It is important to understand, above all, that refining the tests is a human task. Defining rules and adjusting them is always the auditor's role. This care is necessary so that the tests do not become ineffective because of false positives, and it represents a major transformation in the work traditionally performed by these professionals.


The responsibility needed to use AI in your business

We know that artificial intelligence techniques can bring greater productivity and automation to audits. The question of responsibility comes into play through a process called "supervised learning": in practice, auditors confirm whether the issues flagged by the AI are valid.

Based on this feedback, new filters are added to the system so that false positives are minimized. Thanks to the combination of machine learning and manual intervention, this occurrence is reduced without the rules having to change frequently. Over time, the system adapts in an increasingly autonomous and assertive way.
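To make this idea concrete, here is a minimal sketch, in Python, of how auditor feedback can become filters that suppress alerts already judged to be false positives, without rewriting the detection rules themselves. The names (`looks_suspicious`, `alert_signature`) and the data are hypothetical, purely for illustration.

```python
# Minimal human-in-the-loop sketch (hypothetical names, illustrative only):
# a rule flags transactions, the auditor reviews each alert, and rejected
# alerts become filters that suppress similar alerts in later runs.

def looks_suspicious(transaction):
    """Toy detection rule: flag high-value transactions."""
    return transaction["amount"] > 10_000

def alert_signature(transaction):
    """Coarse description of an alert, used to group similar cases."""
    return (transaction["merchant_category"], transaction["amount"] > 10_000)

transactions = [
    {"id": 1, "amount": 15_000, "merchant_category": "payroll"},  # recurring payroll run
    {"id": 2, "amount": 12_000, "merchant_category": "unknown"},
    {"id": 3, "amount": 18_000, "merchant_category": "payroll"},
]

false_positive_filters = set()  # built up from auditor feedback over time

def run_audit(transactions, auditor_confirms):
    """Flag transactions, ask the auditor, and learn new filters."""
    confirmed_alerts = []
    for tx in transactions:
        if not looks_suspicious(tx):
            continue
        if alert_signature(tx) in false_positive_filters:
            continue  # a similar alert was already judged a false positive
        if auditor_confirms(tx):
            confirmed_alerts.append(tx)
        else:
            false_positive_filters.add(alert_signature(tx))
    return confirmed_alerts

# Example auditor decision: payroll runs are legitimate, everything else is suspect.
auditor = lambda tx: tx["merchant_category"] != "payroll"

print(run_audit(transactions, auditor))  # only transaction 2 survives review
print(false_positive_filters)            # the payroll pattern is now filtered out
```

Note how the third transaction never reaches the auditor again: the filter learned from the first review does that work, which is exactly the reduction of false positives described above.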

One type of supervised analysis that exemplifies this is the neural network, a common resource in fraud detection. In the processes described, the auditor's responsibility changes as the project evolves: at the start, their intervention is fundamental to guide the AI's learning; later, it becomes more closely tied to data collection and analysis.
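As a purely illustrative sketch of this kind of supervised analysis, the example below trains a small neural network on synthetic, auditor-labeled transactions. The use of scikit-learn and the chosen features are assumptions of the example, not something prescribed by the processes described here.

```python
# Illustrative sketch only: a small neural network trained on synthetic,
# auditor-labeled transactions (library and features are assumptions).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic features: [amount, hour of day, transactions in the last 24h]
legit = rng.normal(loc=[100, 14, 3], scale=[50, 4, 2], size=(500, 3))
fraud = rng.normal(loc=[900, 3, 15], scale=[300, 2, 5], size=(50, 3))

X = np.vstack([legit, fraud])
y = np.array([0] * len(legit) + [1] * len(fraud))  # labels confirmed by auditors

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Transactions flagged with high probability go to a human auditor for review.
probabilities = model.predict_proba(X_test)[:, 1]
flagged = X_test[probabilities > 0.5]
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
print(f"{len(flagged)} of {len(X_test)} test transactions flagged for review")
```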

Bots: automation and ethics

The methods of intervention in machine learning for auditing already give an idea of the right posture for dealing with AI. This care is necessary because the very algorithms developed to identify the likelihood of certain events may fail, and errors in their design can lead to situations that harm everyone from the company to its stakeholders.

This is why the Institute of Internal Auditors (IIA) has published a primer to warn about this. "Tone at the Top" addresses precisely the methods for building, applying, managing and controlling artificial intelligence. Beyond being prepared for this challenge, the responsibility required to use AI in your company includes customer privacy.

For the relationship between bots and their audience to be adequate, it is essential to offer, first and foremost, transparency. No less important is giving the customer a choice. All of this must be anchored in sound security and in a selective, limited collection of data, which must be stored securely. Sensitive data, in particular, must be handled with special care. After all, under no circumstances can bots clash with customers' right to privacy.
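A minimal sketch of what selective collection and careful handling of sensitive data can look like in code is shown below. The field names, the allowed-field list and the pseudonymization choice are hypothetical and serve only to illustrate the principle of collecting less and protecting what is collected.

```python
# Illustrative sketch only (hypothetical field names): collect only the data
# the bot actually needs and pseudonymize the identifier before storage.
import hashlib

ALLOWED_FIELDS = {"customer_id", "question", "channel"}  # selective collection
SENSITIVE_FIELDS = {"customer_id"}                        # needs special handling

def minimize_and_pseudonymize(raw_record, salt="replace-with-a-secret-salt"):
    record = {k: v for k, v in raw_record.items() if k in ALLOWED_FIELDS}
    for field in SENSITIVE_FIELDS & record.keys():
        digest = hashlib.sha256((salt + str(record[field])).encode()).hexdigest()
        record[field] = digest[:16]  # stored value no longer identifies the person
    return record

raw = {
    "customer_id": "12345",
    "question": "How do I change my plan?",
    "channel": "chatbot",
    "full_address": "Rua X, 100",        # not needed for support, so it is dropped
    "document_number": "999.999.999-99",  # likewise never collected
}
print(minimize_and_pseudonymize(raw))
```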

What can an artificial intelligence auditor do?

The supervised learning we mentioned earlier is one of the main ways to deal with the AI dilemma. Before proceeding, it is worth remembering the case of a bot that Microsoft left unattended on Twitter: it took less than a day of interactions for it to learn to reproduce racist messages.

The great lesson this teaches us about the responsibility needed to use AI in your company is that this resource needs supervision. Filters imposed by the auditor prevent situations like the one described from occurring. A good guide for those who perform this function is "Auditing Artificial Intelligence".

The document, released by ISACA (Information Systems Audit and Control Association), highlights the importance of governance. This means the process we are talking about should not be purely technical: what matters is determining whether the organization has a responsible structure for the use of data and AI.

Increasing confidence in artificial intelligence

To increase confidence in artificial intelligence, the company's board must support the measures described above. This means granting autonomy so that auditors can act, planning and implementing the procedures that provide the responsibility needed to use AI in your company.

To prevent bias and increase trust, it is important that data analysis be non-discriminatory. At the same time, the consumer's rights over the use of this information must be guaranteed. No less important is the concern that these systems treat historically marginalized groups fairly.

It is also necessary to indicate the factors that form the basis of each algorithmic decision. In internal assessments, in turn, standards of fairness need to be put into practice. In the end, audit processes are the best way to ensure that AI is not used in a discriminatory or biased way.
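The sketch below illustrates, with assumed group names, synthetic data and an assumed logistic-regression model, two checks an audit of this kind might run: comparing decision rates across groups and listing the factors that weigh most in the algorithmic decision.

```python
# Illustrative sketch only: a fairness check (decision rates per group) and a
# list of the factors behind the model's decisions. Data, group names and the
# logistic-regression choice are assumptions made for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "tenure_months", "previous_defaults"]

X = rng.normal(size=(400, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=400) > 0).astype(int)
group = rng.choice(["group_a", "group_b"], size=400)

model = LogisticRegression().fit(X, y)
decisions = model.predict(X)

# 1. Fairness check: compare positive-decision rates across groups.
for g in ("group_a", "group_b"):
    rate = decisions[group == g].mean()
    print(f"approval rate for {g}: {rate:.2f}")

# 2. Decision factors: which features drive the algorithmic decision.
for name, weight in sorted(zip(feature_names, model.coef_[0]),
                           key=lambda item: abs(item[1]), reverse=True):
    print(f"{name}: weight {weight:+.2f}")
```

If the approval rates diverge sharply between groups, that is exactly the kind of finding the audit process exists to surface and investigate.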


Audits and reliability

In short, companies should be concerned with auditing, explainability, transparency and reproducibility. The whole process we have seen throughout this article aims to develop best practices and to comply with the controls required by law.

Beyond the audit elements, explainability involves the effort to describe algorithms clearly. Transparency is key to respecting customer data. Reproducibility concerns the possibility of reproducing an AI's decision-making process and obtaining the same results.
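As a simple illustration of reproducibility, the sketch below fixes the random seed of a training pipeline and verifies that two runs produce identical decisions; the pipeline itself is a placeholder built on synthetic data.

```python
# Illustrative sketch only: reproducibility here means that re-running the
# training pipeline with the same data and the same seed yields the same
# decisions, which an audit can verify automatically.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_and_decide(seed):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    model = LogisticRegression(random_state=seed).fit(X, y)
    return model.predict(X)

first_run = train_and_decide(seed=123)
second_run = train_and_decide(seed=123)
assert np.array_equal(first_run, second_run), "decision process is not reproducible"
print("Same seed, same data, same decisions: the run is reproducible.")
```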

All these processes naturally involve a number of challenges. The expectation is that the coming years will bring better tools for explanation and auditing. Addressing these issues in advance, however, is critical if you want to achieve the responsibility needed to use AI in your business.

Tripulação ET

A team of commanders: experts prepared to take insights from the market and turn them into relevant content.