Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. Leading AI textbooks define the field as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term “artificial intelligence” is often used to describe machines that mimic “cognitive” functions that humans associate with the human mind, such as “learning” and “problem solving”.

HOW DOES ARTIFICIAL INTELLIGENCE WORK?

Can machines think? — Alan Turing, 1950

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?” 

Turing’s 1950 paper “Computing Machinery and Intelligence,” and the Turing Test it proposed, established the fundamental goal and vision of artificial intelligence.

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no single definition of the field is universally accepted.

The major limitation of defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions” (Russell and Norvig viii).

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI: 

  1. Thinking humanly
  2. Thinking rationally
  3. Acting humanly 
  4. Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting that “all the skills needed for the Turing Test also allow an agent to act rationally” (Russell and Norvig 4).
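To make the agent framing concrete, here is a minimal sketch of the percept-and-action loop behind Russell and Norvig’s definition, written in Python. The thermostat scenario and every name in it are illustrative assumptions, not examples taken from the textbook: the agent repeatedly receives a percept (a temperature reading) and chooses the action that best serves its goal.

    # A minimal sketch of the percept-and-action loop described above.
    # The thermostat goal and all names here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Percept:
        temperature: float  # what the agent senses from its environment

    def agent(percept: Percept) -> str:
        """Choose the action that best serves the goal of keeping the room near 20 degrees."""
        if percept.temperature < 19.0:
            return "heat_on"
        if percept.temperature > 21.0:
            return "heat_off"
        return "do_nothing"

    # The agent loop: perceive the environment, then act on it.
    for reading in [17.5, 20.2, 22.8]:
        print(reading, "->", agent(Percept(temperature=reading)))

Even this trivial agent fits the definition, since it maps percepts to actions in pursuit of a goal; what separates it from a genuinely intelligent agent is the sophistication of that mapping.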

Patrick Winston, the Ford Professor of Artificial Intelligence and Computer Science at MIT, defines AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”

HOW IS AI USED?

Artificial intelligence generally falls under two broad categories: 

  • Narrow AI: Sometimes referred to as “Weak AI,” this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they operate under far more constraints and limitations than even the most basic human intelligence.
Much of narrow AI is powered by breakthroughs in machine learning and deep learning. Understanding the difference between artificial intelligence, machine learning and deep learning can be confusing. Venture capitalist Frank Chen provides a good overview of how to distinguish between them, noting:

“Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques.” 

Simply put, machine learning feeds a computer data and uses statistical techniques to help it “learn” how to get progressively better at a task, without having been specifically programmed for that task, eliminating the need for millions of lines of written code. Machine learning consists of both supervised learning (using labeled data sets) and unsupervised learning (using unlabeled data sets).  
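As a rough illustration of those two settings, the sketch below uses scikit-learn (a library choice assumed here, not named above) to fit a supervised classifier on a small labeled dataset and an unsupervised clustering model on the same data with the labels withheld:

    # Supervised vs. unsupervised learning on a toy dataset.
    # scikit-learn and the iris data are illustrative choices, not requirements.
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)  # flower measurements plus species labels

    # Supervised: the model "learns" from labeled examples and is scored on held-out data.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("supervised accuracy:", classifier.score(X_test, y_test))

    # Unsupervised: the same measurements with no labels; the model finds structure on its own.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("first ten cluster assignments:", clusters[:10])

In neither case is the task spelled out as explicit rules; the behavior comes from the statistics of the data.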

Deep learning is a type of machine learning that runs inputs through a biologically inspired neural network architecture. The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go “deep” in its learning, making connections and weighting inputs for the best results.
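To show what “hidden layers” mean in practice, here is a minimal forward pass through a small network in NumPy. The layer sizes, the ReLU activation and the random weights are all assumptions for illustration; in real deep learning the weights would be adjusted from labeled data rather than left random:

    # A tiny neural network: data flows through stacked layers of weights,
    # each hidden layer's output feeding the next layer.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)  # a common activation applied between layers

    # 4 inputs -> 8 hidden units -> 8 hidden units -> 3 output scores.
    weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 3))]

    def forward(x):
        """Run one input vector through every layer of the network."""
        activation = x
        for w in weights[:-1]:
            activation = relu(activation @ w)  # hidden layers transform the data
        return activation @ weights[-1]        # the final layer produces output scores

    print(forward(np.array([5.1, 3.5, 1.4, 0.2])))

Training, i.e. adjusting those weights so the output scores match labeled examples, is what turns this structure into a learning system.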
 

  • Artificial General Intelligence (AGI): AGI, sometimes referred to as “Strong AI,” is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem. 

The creation of a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for AGI has been fraught with difficulty. 

The search for a “universal algorithm for learning and acting in any environment” (Russell and Norvig 27) isn’t new, but time hasn’t eased the difficulty of essentially creating a machine with a full set of cognitive abilities.

AGI has long been the muse of dystopian science fiction, in which super-intelligent robots overrun humanity, but experts agree it’s not something we need to worry about anytime soon.

ARTIFICIAL INTELLIGENCE EXAMPLES

  • Smart assistants (like Siri and Alexa)
  • Disease mapping and prediction tools
  • Manufacturing and drone robots
  • Optimized, personalized healthcare treatment recommendations
  • Conversational bots for marketing and customer service
  • Robo-advisors for stock trading
  • Spam filters on email
  • Social media monitoring tools for dangerous content or false news
  • Song or TV show recommendations from Spotify and Netflix

RISKS OF ARTIFICIAL INTELLIGENCE

  • Automation-spurred job loss
  • Privacy violations
  • ‘Deepfakes’
  • Algorithmic bias caused by bad data
  • Socioeconomic inequality
  • Weapons automation

“Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called ‘technological singularity’ or ‘intelligence explosion,’ the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.” 

FUTURE OF AI

“[AI] is going to change the world more than anything in the history of mankind. More than electricity.”— AI oracle and venture capitalist Dr. Kai-Fu Lee, 2018

Artificial intelligence is impacting the future of virtually every industry and every human being. It has acted as the main driver of emerging technologies like big data, robotics and IoT, and it will continue to act as a technological innovator for the foreseeable future.

There’s virtually no major industry modern AI — more specifically, “narrow AI,” which performs objective functions using data-trained models and often falls into the categories of deep learning or machine learning — hasn’t already affected. That’s especially true in the past few years, as data collection and analysis have ramped up considerably thanks to robust IoT connectivity, the proliferation of connected devices and ever-speedier computer processing.

Some sectors are at the start of their AI journey; others are veteran travelers. Both have a long way to go. Regardless, the impact artificial intelligence is having on our present-day lives is hard to ignore:

“If implemented responsibly, AI can benefit society. However, as is the case with most emerging technology, there is a real risk that commercial and state use has a detrimental impact on human rights.”
