Understanding the Difference Between Artificial Intelligence (AI) and Complex Technology


Most of us loosely use the term Artificial Intelligence (AI) for any complex technology that we come across in day-to-day life.

Just because we do not understand, or cannot decipher, how a particular complex system arrives at its decisions does not make it right to call such a system an ‘intelligent system’. Merely making decisions does not make a system intelligent.

The four key elements of AI are:

  • Learning
  • Reasoning
  • Problem-solving
  • Perception

When should you use the term Artificial Intelligence (AI) to describe a system?

A complex technology can be referred to as artificial intelligence only if:

  • It is capable of making decisions by drawing conclusions or gaining knowledge from previous experiences.
  • It operates without direct interference from a human being.
  • Human involvement is limited to creating the internal structure of the complex system.
  • A human only indirectly influences the outcomes, by choosing the training model and the learning system.

Consider the following two examples:

Example 1: If you want to make two packets of popcorn, the smart microwave oven sets the perfect time and alerts you to remove the vessel when the popcorn is ready.

Example 2: The lights outside your home automatically turn on in the evening.

  • These may appear to be intelligent systems at first, but the engineers who actually built them can tell you that their apparent smartness comes from decisions made in advance by smart, intelligent humans.
  • The examples above cannot be tagged as ‘intelligent systems’ because they follow a specific path of instructions laid down by humans, as the sketch below illustrates.
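To make that concrete, here is a minimal, hypothetical sketch of what a ‘smart’ popcorn mode could look like inside. The timings, the lookup table, and the alert message are illustrative assumptions, not the design of any real oven; the point is that every decision is a rule a human wrote in advance.

```python
# Minimal, hypothetical sketch of a "smart" popcorn mode.
# Every value and rule below was chosen by a human engineer in advance;
# nothing here is learned from experience.

# Fixed lookup table: number of packets -> cooking time in seconds.
POPCORN_TIMES = {1: 150, 2: 270, 3: 390}

def smart_popcorn_mode(packets: int) -> None:
    if packets not in POPCORN_TIMES:
        print("Unsupported load -- use manual mode.")
        return
    cook_time = POPCORN_TIMES[packets]   # human-chosen timing, not learned
    print(f"Cooking {packets} packet(s) for {cook_time} seconds...")
    print("Beep! Popcorn is ready -- please remove the vessel.")

smart_popcorn_mode(2)
```

The oven appears to ‘know’ the perfect time for two packets, but it is only reading a table that a human filled in.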

How do you differentiate between what should actually be referred to as Artificial Intelligence (AI) and what should not be?

  • If you want to find out whether a system is actually smart and capable of making its own decisions, the most reliable way is to ask the person who developed the complex system.
  • Complex systems that do not fall under the category of Artificial Intelligence (AI) are simply following instructions given by humans.
  • If such a system fails at certain tasks, we can check what went wrong in the code and find the part that did not function properly.
  • In the two examples above, code written by programmers made the smart microwave alert you to remove the popcorn.
  • Likewise, the lights outside your house turn on automatically when someone is at your doorstep, and that someone may even be a stray cat or a dog.
  • If the system fails to work in all possible cases, it can be reprogrammed to cover the unforeseen situations that were not handled before, as the sketch below shows.
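A sketch of the porch-light logic makes the point about debugging and reprogramming concrete. The sensor reading and the size threshold are hypothetical; the ‘fix’ for the stray-cat case is simply another rule written by a human.

```python
# Hypothetical porch-light controller: the original rule turns the light on
# for any motion, so a stray cat triggers it too.

def light_on_v1(motion_detected: bool) -> bool:
    return motion_detected            # cat, dog, or person -- all turn it on

# After the failure is traced to this rule, a human reprograms it with an
# extra condition (an assumed height estimate from the sensor, in centimetres).
def light_on_v2(motion_detected: bool, object_height_cm: float) -> bool:
    return motion_detected and object_height_cm >= 100   # ignore small animals

print(light_on_v1(True))          # True  -- even for a stray cat
print(light_on_v2(True, 30))      # False -- the cat no longer triggers it
print(light_on_v2(True, 170))     # True  -- a person at the doorstep
```

Because every rule is visible in the code, the faulty behaviour can be traced to a specific line and fixed, and that transparency is exactly what separates these systems from ones that learn.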

Is a human being responsible for Artificial Intelligence (AI)?

  • Imagine a robot trained with several examples of how to pick up objects, which breaks an egg when lifting it.
  • The egg breaks because the trained robot applies too much force.
  • If the intelligent system was trained using datasets of picking up various objects (a cricket ball, a tennis ball, a baseball, and so on), why does the egg break when the robot picks it up?
  • The intelligent system may have created a certain internal representation based on the different objects that were used to train it.
  • An AI learning technique called reinforcement learning may also have been used to train the system, in which the trainer (a human) provided feedback about the outcomes of different actions.
  • If the robot picks up the egg but breaks it in the process, that can be the result of wrong training, or more specifically of the wrong training material: the robot was trained only on different kinds of balls.

In this case, the human can be blamed for choosing the wrong training material.

  • If no externally visible error was made, it is practically impossible to debug the intelligent system further and find out who was at fault.
  • For a simple learning system, various techniques can reveal what was learned.
  • But for complex systems, this is not possible.
  • In the robot-and-egg example, the programmer of the learning system can be held responsible only if they promised that their learning system would learn how to pick up the egg just like the different types of balls.
  • In reality, the designer of the training program is responsible, since the training material was too generic: it covered objects in general (balls) and not the egg specifically, as the sketch below illustrates.
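Here is a minimal, hypothetical sketch of how such a failure can come about. The object masses, grip forces, and the simple linear model are illustrative assumptions, not the actual robot's method; the point is that an internal representation learned only from rigid balls has no notion of fragility.

```python
# Minimal, hypothetical sketch: a grip-force rule learned only from balls.
# All numbers and the linear model are illustrative assumptions.
import numpy as np

# Training data chosen by a human: (object, mass in grams, grip force in newtons).
# Only rigid balls are represented -- no fragile objects like eggs.
training_data = [
    ("tennis ball",   58, 12.0),
    ("baseball",     145, 20.0),
    ("cricket ball", 160, 22.0),
]

masses = np.array([mass for _, mass, _ in training_data])
forces = np.array([force for _, _, force in training_data])

# The "internal representation" the system builds for itself:
# a least-squares line, force = a * mass + b.
a, b = np.polyfit(masses, forces, deg=1)

def grip_force(mass_grams: float) -> float:
    """Predict grip force from mass alone -- fragility was never represented."""
    return a * mass_grams + b

egg_mass = 55          # grams, roughly the mass of a tennis ball
egg_tolerance = 2.5    # newtons an eggshell can take (illustrative value)

predicted = grip_force(egg_mass)
print(f"Predicted grip force for the egg: {predicted:.1f} N")
print("The egg breaks." if predicted > egg_tolerance else "The egg survives.")
```

The fitting routine did exactly what it was asked to do; the blind spot comes from the training material the human selected, which is why the responsibility falls on the designer of the training data rather than on the learning algorithm.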

Can artificially intelligent systems be bias-free?

  • In the above example, the system carried a bias towards the kinds of objects it had seen, and eggs were completely overlooked in the training dataset.
  • Bias in AI outcomes is a big concern, especially when AI is used by businesses and brands.
  • What if Artificial Intelligence (AI) discriminates, or is biased, with respect to religion, gender, age, race, caste, or any other attribute?
  • It is next to impossible to cover every possible scenario in the training data of an intelligent system.
  • There is also no guarantee that intelligent systems won't make a wrong decision in certain unforeseen circumstances, but a simple coverage check of the training data, as sketched below, is often the first step in spotting such gaps.
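A minimal sketch of such a coverage audit is shown below. The categories and counts are hypothetical; in a real business setting the same idea would be applied to sensitive attributes such as gender or age groups.

```python
# Minimal sketch of a training-data coverage audit.
# The categories and counts are hypothetical, not taken from any real system.
from collections import Counter

# Labels of the training examples the system learned from.
training_labels = [
    "tennis ball", "baseball", "cricket ball",
    "tennis ball", "baseball", "cricket ball",
]

# Categories the deployed system is actually expected to handle.
expected_categories = {"tennis ball", "baseball", "cricket ball", "egg"}

counts = Counter(training_labels)
for category in sorted(expected_categories):
    n = counts.get(category, 0)
    status = "MISSING -- likely blind spot / bias" if n == 0 else f"{n} examples"
    print(f"{category:12s}: {status}")
```

An audit like this can flag groups that are missing or under-represented, but it cannot guarantee correct behaviour in genuinely unforeseen situations.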

It is important to keep in mind that intelligent systems use training data, or knowledge gained from previous observations, to create an internal representation with which they make new decisions.

These observations and internal representations are not made by a human being.

The intelligent system powered by Artificial Intelligence (AI) creates the internal representation itself.

The human being is involved only indirectly, through control over the learning system and the training examples that were used.

 
