Reading time: 10 minutes

Throughout human history, societies have been resistant to change. Change involves adaptation and always comes at a cost. Giving up the old in favour of the new is a continuous process and part of evolution. Sometimes, the transition can also hide traps. Newer is not necessarily better.

That is the case with AI. Like any other step forward, AI comes with advantages and disadvantages (read the previous article here), so both users and developers face challenges when adopting it. The same is true for the industry: on one hand, introducing AI can be expensive; on the other, nobody wants to be left behind.

Here is a summary of the challenges of AI:

- The quality and quantity of data
- Limited support and trust
- Integration
- The bias problem
- Privacy and security

Analyzing and understanding the challenges of a new technology can be decisive for its future. Adopting cutting-edge technologies has always been a common effort of the community: preparing transitions, adapting hardware, upgrading software, educating users, and attracting funds.


Let’s review the challenges one by one and discuss each in more depth.

The quality and quantity of data


As we know, the quality of an AI system relies heavily on the data fed into it. AI systems require massive training data sets; they learn from the available information in a way similar to humans, but to identify patterns they need far more data than a human does. [1]

When it comes to analyzing data, human speed cannot even be compared with that of a machine. Machines are also faster learners than any human can be.

Here is the tricky part: the better the data you provide, the better the outcomes the AI will produce. In general, users need to know what data they already have and compare it to the data the model needs. For that, they must know which model they will be operating; otherwise, they won’t be able to specify the type of data that is needed.

Once the data is classified or structured, the user can see what they already have and what they are missing. The missing parts may be publicly unrestricted information or data from third parties. Some missing data may still be difficult to obtain, and developers need to be prepared for the fact that not all types of data are easily available. In such cases, synthetic data can be created artificially based on real data; this approach is used when not enough real data is available to train the models.
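To make this concrete, here is a minimal sketch of one common way to create synthetic tabular data, assuming purely numeric features: bootstrap rows from the real data and add small noise, so the synthetic rows follow the real distribution without duplicating it. The function name and noise scale are illustrative choices, not a standard API.

```python
import numpy as np

def make_synthetic(real_data: np.ndarray, n_samples: int,
                   noise_scale: float = 0.05, seed: int = 0) -> np.ndarray:
    """Generate synthetic rows by resampling real rows and adding small
    Gaussian noise scaled to each column's standard deviation."""
    rng = np.random.default_rng(seed)
    # Draw rows from the real data with replacement (a bootstrap sample).
    idx = rng.integers(0, len(real_data), size=n_samples)
    sample = real_data[idx]
    # Perturb each column proportionally to its spread, so the synthetic
    # rows stay close to the real distribution without copying it exactly.
    jitter = rng.normal(0.0, noise_scale * real_data.std(axis=0), size=sample.shape)
    return sample + jitter

# Example: 100 real rows with 3 numeric features -> 500 synthetic rows.
real = np.random.default_rng(1).normal(size=(100, 3))
synthetic = make_synthetic(real, n_samples=500)
print(synthetic.shape)  # (500, 3)
```

Real projects typically use more sophisticated generators, but the principle is the same: the synthetic set is derived from, and statistically resembles, the real one.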

Another way to acquire missing data is to supplement your data set with open data, for example via Google Dataset Search [2]. There are also multiple algorithms and tools for working with data sets that contain missing values. [3]
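As an illustration of one such tool, the sketch below uses scikit-learn’s SimpleImputer to fill missing values with column means; this is just one simple strategy among the many that [3] discusses.

```python
import numpy as np
from sklearn.impute import SimpleImputer

# A small matrix with missing entries marked as NaN.
X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan],
              [4.0, 5.0]])

# Replace each missing value with the mean of its column.
imputer = SimpleImputer(strategy="mean")
X_filled = imputer.fit_transform(X)
print(X_filled)
# Column means: (1 + 7 + 4) / 3 = 4.0 and (2 + 3 + 5) / 3 = 3.33,
# so the NaNs become 4.0 and 3.33 respectively.
```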

Some other alternatives for data set search:

An important aspect to keep in mind is that low data quality may cause poor decisions, compromise the system or make it a target for cyber attacks.

When it comes to training the models, the size of the input data sets matters. As a consequence, this can encourage many mechanisms of data trafficking. Hence the race for data collection and the rising value of data. This is why users can sometimes become products themselves, even when the platforms they use are apparently free. Read more in the Privacy and security section below.

Limited support and trust


The difficulty with AI is that it acts like a black box for users, which makes them uncomfortable: they don’t understand how its decisions were made.

From a technical point of view, this is quite common: engineers work with black boxes every day, and mechanisms for handling them are widely developed and used. But for other categories of professionals, relying on a black box can be more challenging, and that generates concerns and trust issues. Find out more about the AI black box problem here. [4]

Compare this with banks using simple algorithms based on linear mathematics: it is much simpler to describe how they work and how they generate an output from a given input. If we use AI for such processes instead, it becomes quite challenging and time-consuming for a non-technical professional to understand the complex algorithms involved without prior training.
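A small sketch of this contrast, using toy, made-up applicant data: a linear model exposes one readable weight per feature, which is exactly the transparency a black-box model lacks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy applicant data: [income, debt_ratio] (hypothetical features).
X = rng.normal(size=(200, 2))
# A made-up rule for illustration: high income and low debt -> approved.
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A linear model is auditable: each feature has one readable weight,
# so a bank can state exactly how income or debt moved the decision.
for name, coef in zip(["income", "debt_ratio"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
# A deep network offers no such per-feature summary out of the box,
# which is the black-box gap described above.
```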

Consequently, AI has struggled to build trust among people. The only way to solve this is to let people see how the technology works. Of course, this takes time and money, and in general, an increase in cost leads to a decrease in support.

Also, AI implementation does not yet have enough proven use cases in the market, and without them, organizations are reluctant to invest money in AI-based projects. As a result, comparatively few organizations have been interested in funding the development of AI-based products. [5]

In addition, there are not enough people who know how to operate machines that think and learn by themselves. As discussed in the previous article, one disadvantage of AI may be unemployment: people may lose jobs because they are replaced by machines; on the other hand, new jobs will be created, and people will need training in order to practice them. An alternative in such cases is a shift towards platforms and tools that offer AI-driven work as a service. This enables organizations to take ready-made solutions and plug in their own data rather than building everything from scratch – but then we return to the AI black box problem.

Integration


Integrating AI into existing systems is more complicated than adding a plugin to a browser. The elements and interfaces needed to address the business needs have to be set up. The developer also needs to consider data infrastructure, labelling, data storage, and feeding the data into the system. Model training and testing verify the effectiveness of the developed AI; a feedback loop continuously improves the models based on people’s actions; and data sampling reduces the amount of data stored, so that models run more swiftly while still producing accurate outcomes. [6]
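A minimal sketch of such a feedback loop, with entirely simulated data standing in for production feedback: the model is updated incrementally, and sampling keeps only part of the incoming data, as described above.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical continuous-improvement loop: train once, then fold in
# a sampled batch of user feedback at each iteration.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)

model = SGDClassifier()
model.partial_fit(X, y, classes=[0, 1])

for round_ in range(3):
    # Pretend these rows arrived from production with user-confirmed labels.
    X_new, y_new = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
    # Data sampling: keep only a fraction to limit storage and speed up updates.
    keep = rng.random(len(X_new)) < 0.5
    model.partial_fit(X_new[keep], y_new[keep])
    print(f"round {round_}: retrained on {keep.sum()} sampled feedback rows")
```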

In order to overcome possible integration challenges, users need to make sure that everyone has a clear understanding of the process. This requires vendors to have broader expertise in the field, not limited to building models. When the implementation is planned strategically and carried out step by step, the risk of failure is mitigated. [6]

After successfully integrating AI into their systems, users will have to train people to use the model, provide constant guidance about it, and suggest ways to develop the AI further where applicable. [6]

Maintaining proper data infrastructure and making sure the software has the data it needs is an ongoing, real-time process; only then can it genuinely help the company.

The bias problem


The biggest problem with AI is that these systems can produce biased results when they are built by developers with biased assumptions [7] (especially in the black box cases).

The results generated by an AI engine are only as good (or as bad) as the data it is trained on.

Bad data is often laced with gender, racial, ethnic, and communal biases. [8] Algorithms trained on such data are then used to make further decisions, such as which passenger gets a car (in ride-sharing services) or who the next recommended connection on a social network will be (Tinder, Facebook, LinkedIn etc.). In other cases, these algorithms are used in psychological assessments by companies deciding which candidate to hire. And so on…

If bias lurks in these algorithms, it can lead to corrupted conclusions and, further, to unfair decisions and consequences.

It is usually speculated (or marketed) that AI “is conscious” and can make its own choices. In reality, machines make decisions based on the input data they receive; it is reasonable and common sense that the same happens with humans. But AI has no opinions, emotions, or concerns of its own – it simply learns from the opinions of others. And that is where the bias creeps in.

It is not that bias does not exist in humans. On the contrary, human bias is even more unpredictable, and people are good at hiding it. But society has already developed multiple mechanisms to prevent or sanction human bias. That has not yet been possible for AI, since the technology is too new, and this is one of the challenges in adopting it.

Bias can occur in different contexts as a result of numerous factors, such as the way data is collected or sampled. We will always have a limited capacity for collecting data; if, for example, we run a survey in a certain community, the results will reflect that community’s opinion rather than generalizing to other communities that live in different contexts.
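The survey example can be simulated in a few lines; the populations and opinion rates below are invented purely to show how sampling only one community skews the estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two communities with genuinely different opinions on some question
# (probability of answering "yes"): 30% vs 70%, in a 50/50 population.
pop_a = rng.random(50_000) < 0.30
pop_b = rng.random(50_000) < 0.70

true_rate = np.concatenate([pop_a, pop_b]).mean()

# Surveying only community A (a convenience sample) skews the estimate.
survey = rng.choice(pop_a, size=1_000, replace=False)

print(f"true population rate:  {true_rate:.2f}")    # ~0.50
print(f"single-community poll: {survey.mean():.2f}")  # ~0.30
```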

The thing with bias is that its negative impact is not observed instantaneously, but in the future, after models have been trained not only on biased data but on bad data as well.

For that reason, it is essential to train AI systems on unbiased data and to develop algorithms that are easily understandable and explainable. [9]

Companies like Microsoft are developing tools that can automatically identify bias in a series of artificial intelligence algorithms. This is a remarkable step towards automating the detection of unfairness in machine learning and a great opportunity for corporations to leverage AI without discriminating against specific groups of people. [10]
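The exact workings of such tools are proprietary, but a basic fairness check is easy to sketch: the demographic parity gap below compares positive-decision rates between two groups, using made-up hiring decisions.

```python
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-decision rates between two groups (0 and 1).
    A gap near 0 suggests both groups are treated similarly on this metric."""
    rate_0 = decisions[group == 0].mean()
    rate_1 = decisions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical hiring decisions (1 = invited to interview) and a group label.
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(decisions, group))  # 0.75 vs 0.25 -> gap of 0.5
```

Demographic parity is only one of several fairness definitions, and real auditing tools combine multiple such metrics.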

Privacy and security


Most AI applications rely on massive quantities of data for learning and making intelligent decisions. Such systems depend on data that is often sensitive and personal in nature. They learn from this data and improve themselves, and because of this systematic learning they become prone to data breaches and identity theft. [5] Steps are being taken in response to customers’ increasing awareness of the growing number of machine-made decisions.

A remarkable method known as federated learning allows data scientists to build AI without compromising users’ data security and confidentiality. I recommend you learn more about federated learning here [11] and here [12].
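In spirit, federated learning looks like the sketch below: a minimal federated averaging loop for linear regression, in which each client computes an update on its own private data and the server only ever sees model weights, never raw data. This is a toy illustration under simplified assumptions, not a production protocol.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a client's private data.
    Only the updated weights leave the device, never the raw data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding its own private data set.
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 100)))

weights = np.zeros(2)
for _ in range(50):
    # Each client trains locally; the server only averages the weights.
    updates = [local_update(weights, X, y) for X, y in clients]
    weights = np.mean(updates, axis=0)

print(weights)  # approaches [2.0, -1.0] without centralizing any data
```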


In preparing this article, I studied the lectures from GlobalTechCouncil. They served as a great source of inspiration and guidance in discovering and understanding AI technologies.

Below, you can also find some valuable materials on this topic.

References

[1] https://neoteric.eu/blog/12-challenges-of-ai-adoption
[2] https://datasetsearch.research.google.com
[3] https://towardsdatascience.com/all-about-missing-data-handling-b94b8b5d2184
[4] https://www.thinkautomation.com/bots-and-ai/the-ai-black-box-problem
[5] https://towardsdatascience.com/artificial-intelligence-opportunities-challenges-in-businesses-ede2e96ae935
[6] https://neoteric.eu/blog/12-challenges-of-ai-adoption
[7] https://www.bbntimes.com/technology/what-are-the-biggest-challenges-in-artificial-intelligence-and-how-to-solve-them
[8] https://www.forbesindia.com/blog/business-strategy/artificial-intelligence-key-challenges-and-opportunities
[9] https://www.ericsson.com/en/blog/2021/11/ai-bias-what-is-it
[10] https://www.technologyreview.com/2018/05/25/66849/microsoft-is-creating-an-oracle-for-catching-biased-ai-algorithms
[11] https://en.wikipedia.org/wiki/Federated_learning
[12] https://ai.googleblog.com/2017/04/federated-learning-collaborative.html
[13] https://www.upgrad.com/blog/top-challenges-in-artificial-intelligence