What is artificial intelligence (AI)?

Artificial intelligence (AI) is a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.

AI is the backbone of modern computing innovation, unlocking value for individuals and businesses. For example, optical character recognition (OCR) uses AI to extract text and data from images and documents, convert unstructured content into structured data that can be directly used by enterprises, and uncover valuable data insights.  
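
To make this concrete, here is a minimal OCR sketch in Python using the open-source pytesseract library (an illustrative choice, not a tool named by this article; the file name is hypothetical):

```python
# Minimal OCR sketch: turn an image of a document into machine-readable text.
# Requires a local Tesseract install plus: pip install pytesseract pillow
from PIL import Image
import pytesseract

image = Image.open("invoice.png")            # "invoice.png" is a hypothetical scanned document
text = pytesseract.image_to_string(image)    # unstructured pixels -> text a program can use
print(text)
```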

Definition of artificial intelligence

Artificial intelligence is a field of science focused on building computers and machines that can reason, learn, and act in ways that would normally require human intelligence, or that involve data on a scale beyond what a human can analyze.

AI is a broad field that encompasses many different disciplines, including computer science, data analysis and statistics, hardware and software engineering, linguistics, neurology, and even philosophy and psychology.

At the operational level for business use, AI is a set of technologies primarily based on machine learning and deep learning for data analysis, prediction, object classification, natural language processing, recommendation, intelligent data retrieval, and more.

Types of Artificial Intelligence

AI can be organized in a variety of ways, depending on the stage of development or the operation being performed.

For example, AI development is typically divided into four phases.

  1. Reactive machines: Limited AI that responds only to specific stimuli according to preprogrammed rules. It does not use memory, so it cannot learn from new data. IBM's Deep Blue supercomputer, which defeated chess champion Garry Kasparov in 1997, is an example of a reactive machine.
  2. Limited memory: Most modern AI is considered limited-memory AI. It uses memory to improve over time as it is trained with new data, usually through an artificial neural network or another training model. Deep learning, a subset of machine learning, is considered limited-memory AI.
  3. Theory of mind: Theory-of-mind AI does not currently exist, but researchers are exploring its possibilities. It describes AI that could emulate human thinking and make decisions the way humans do, including recognizing and remembering emotions and responding as a human would in social situations.
  4. Self-awareness: Self-aware AI goes a step further than theory-of-mind AI; it describes a hypothetical machine that is aware of its own existence and has the intellectual and emotional capabilities of a human. Like theory-of-mind AI, self-aware AI does not currently exist.

A more useful way to broadly classify types of AI is by what the machine can do. All of the artificial intelligence in use today is considered "narrow" because it can perform only a narrow set of operations based on its programming and training. For example, an AI algorithm used for object classification cannot perform natural language processing. Google Search is a form of narrow AI, as are predictive analytics and virtual assistants.

Artificial general intelligence (AGI) is the idea that a machine could "sense, think, and act" just as a human does. AGI does not currently exist. The next level would be artificial superintelligence (ASI), in which machines could outperform humans in all respects.

AI training model

When businesses talk about AI, they often talk about “training data.” What does “training data” mean? Remember, limited-memory AI is AI that improves over time by being trained on new data. Machine learning is a subset of artificial intelligence that uses algorithms trained on data to achieve results.

Broadly speaking, three learning models are often used in machine learning:

Supervised learning: A machine learning model that uses labeled training data (structured data) to map specific inputs to outputs. In simple terms, to train an algorithm to recognize pictures of cats, you feed it pictures labeled as cats.
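
As a rough sketch of supervised learning, the short Python snippet below trains a scikit-learn classifier on the built-in, labeled Iris dataset (the library, dataset, and model are illustrative assumptions, not part of the definition above):

```python
# Supervised learning sketch: labeled examples map inputs (measurements) to outputs (species).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                       # features and their labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                              # learn the input -> label mapping
print("accuracy on held-out data:", model.score(X_test, y_test))
```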

Unsupervised learning: A machine learning model that learns patterns from unlabeled (unstructured) data. Unlike supervised learning, the end result is not known in advance; instead, the algorithm learns from the data and classifies it according to its characteristics. Unsupervised learning is therefore well suited to pattern matching and descriptive modeling.
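
A minimal unsupervised-learning sketch, assuming scikit-learn and synthetic data (both illustrative choices), might look like this:

```python
# Unsupervised learning sketch: k-means groups unlabeled data by its own characteristics.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # the true labels are never used
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])        # cluster assignments discovered from the data alone
print(kmeans.cluster_centers_)    # a descriptive summary of each discovered group
```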

In addition to supervised and unsupervised learning, people often employ a hybrid approach called “semi-supervised learning,” in which only part of the data is labeled. In semi-supervised learning, the end result is known, but the algorithm must decide how to organize and structure the data to achieve the desired result.
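
For illustration, scikit-learn's LabelPropagation can act as a simple semi-supervised learner; in the sketch below, unlabeled points are marked with -1 (the dataset and the fraction of hidden labels are arbitrary choices):

```python
# Semi-supervised learning sketch: only part of the data keeps its labels.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelPropagation

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.7] = -1      # hide ~70% of the labels (-1 means "unlabeled")

model = LabelPropagation().fit(X, y_partial)  # the algorithm organizes the unlabeled points itself
print("accuracy against the full labels:", model.score(X, y))
```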

Reinforcement learning: A machine learning model that can be broadly described as "learning by doing." An "agent" learns to perform a defined task through trial and error (a feedback loop) until its performance falls within a desired range. The agent receives positive reinforcement when it performs the task well and negative reinforcement when it performs it poorly. An example of reinforcement learning is teaching a robotic hand to pick up a ball.
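
The toy sketch below shows that trial-and-error loop using tabular Q-learning on a tiny one-dimensional grid (a simplified stand-in for the robot-hand example; all names and hyperparameters are illustrative):

```python
# Reinforcement learning sketch: the agent earns +1 for reaching the goal cell
# and a small penalty for every other move (negative reinforcement for wasted steps).
import numpy as np

n_states, goal = 6, 5            # cells 0..5, goal at the right end
actions = [-1, +1]               # move left or move right
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != goal:
        a = np.random.randint(2) if np.random.rand() < epsilon else int(np.argmax(Q[state]))
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == goal else -0.01
        # feedback loop: update the estimate of how good this action was in this state
        Q[state, a] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, a])
        state = next_state

print(np.argmax(Q, axis=1))      # learned policy: "move right" (1) for every cell before the goal
```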

Common types of artificial neural networks

A common training model in AI is an artificial neural network (a model loosely based on the human brain).

A neural network is a system of artificial neurons (sometimes called perceptrons) that act as computing nodes for classifying and analyzing data. Data is fed into the first layer of the neural network, where each perceptron makes a decision and passes that information on to multiple nodes in the next layer. Training models with more than three layers are called "deep neural networks," or "deep learning"; some modern neural networks have hundreds or thousands of layers. The output of the final perceptrons completes the task set for the neural network, such as classifying an object or finding patterns in data.
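
The NumPy sketch below illustrates this flow of data through layers of nodes; the weights are random and nothing is trained, so it shows only the forward pass:

```python
# Forward-pass sketch: each layer of "perceptrons" transforms its input and
# hands the result to the next layer.
import numpy as np

def layer(x, w, b):
    return np.maximum(0, x @ w + b)        # each node: weighted sum, then an activation

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                # one input sample with 4 features

h1 = layer(x, rng.normal(size=(4, 8)), np.zeros(8))    # first layer of nodes
h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))   # hidden layer
out = h2 @ rng.normal(size=(8, 3))                      # final layer produces 3 scores
print(out)                                              # e.g. scores for 3 possible classes
```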

Some of the most common types of artificial neural networks you’re likely to encounter include:

Feedforward Neural Network (FF): One of the earliest forms of neural network, in which data flows in one direction through layers of artificial neurons until an output is produced. Today, most feedforward neural networks are considered "deep feedforward" networks with multiple layers (including multiple "hidden" layers). Feedforward networks are typically paired with an error-correcting algorithm called "backpropagation": in simple terms, the algorithm starts from the network's output and works backward toward the first layer, finding errors in order to improve the network's accuracy. Many simple but powerful neural networks are deep feedforward networks.
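
As a hedged illustration of a deep feedforward network trained with backpropagation, the following NumPy toy learns the XOR function (layer sizes, learning rate, and iteration count are arbitrary choices):

```python
# Feedforward + backpropagation sketch on XOR.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass: data flows one way through the layers
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: start from the output error and work back toward the input
    dp = p - y                                   # cross-entropy gradient at the output
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)              # chain rule through the tanh layer
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad                      # small correction against the error

print(p.round(3))                                # predictions approach [0, 1, 1, 0]
```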

Recurrent Neural Network (RNN): A type of neural network that differs from feedforward networks in that it typically works with time-series data or data that involves sequences. Unlike feedforward neural networks, which apply weights at each node of the network, recurrent neural networks retain a "memory" of what happened at the previous step, and that memory informs the output of the current step. For example, when performing natural language processing, an RNN can "remember" the other words used in a sentence. RNNs are commonly used in speech recognition, translation, and image captioning.
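
The NumPy sketch below shows the core recurrent step: a hidden state acts as the "memory" carried from one element of the sequence to the next (the weights are random and untrained):

```python
# Recurrent step sketch: the hidden state h is the network's running "memory".
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(5, 16))          # input -> hidden weights
W_hh = rng.normal(size=(16, 16)) * 0.1   # hidden -> hidden weights (the recurrent "memory" path)
h = np.zeros(16)

sequence = rng.normal(size=(7, 5))       # e.g. 7 time steps (words), 5 features each
for x_t in sequence:
    h = np.tanh(x_t @ W_xh + h @ W_hh)   # each step sees the current input AND the past
print(h[:4])                             # the final state summarizes the whole sequence
```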

Long Short-Term Memory (LSTM): An advanced form of RNN that can use memory to "remember" what happened at earlier steps. The difference between an RNN and an LSTM is that an LSTM can remember what happened many steps ago by using "memory cells." LSTMs are often used in speech recognition and in making predictions.
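
A minimal forward pass through an LSTM layer, assuming PyTorch as the framework (an illustrative choice, not one named here), shows where the "memory cell" state lives:

```python
# LSTM sketch: the cell state c is the "memory cell" carried across the sequence.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=5, hidden_size=16, batch_first=True)
x = torch.randn(1, 20, 5)              # 1 sequence, 20 time steps, 5 features each

output, (h_n, c_n) = lstm(x)           # c_n holds the long-term "memory cell" contents
print(output.shape)                    # torch.Size([1, 20, 16]) - one output per time step
print(c_n.shape)                       # torch.Size([1, 1, 16])  - the final cell state
```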

Convolutional Neural Network (CNN): Among the most common neural networks in modern artificial intelligence. CNNs are most often used for image recognition, and they use several distinct layers (a convolutional layer followed by a pooling layer) that filter different parts of an image before putting it back together (in a fully connected layer). Earlier convolutional layers may look for simple features of an image, such as colors and edges, before looking for more complex features in additional layers.
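
A small PyTorch sketch (again, an illustrative framework choice) stacks convolutional, pooling, and fully connected layers in the order described above:

```python
# CNN sketch: convolution + pooling filter the image, a fully connected layer
# turns the filtered features into class scores.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # early layer: simple features (edges, colors)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # pooling: 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # later layer: more complex features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # fully connected layer: 10 class scores
)

x = torch.randn(1, 3, 32, 32)                     # one 32x32 RGB image
print(model(x).shape)                             # torch.Size([1, 10])
```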

Generative Adversarial Network (GAN): A network in which two neural networks compete against each other in a game that ultimately improves the accuracy of their output. One network (the generator) creates samples that the other network (the discriminator) tries to classify as real or fake. GANs are used to create realistic images and even artwork.
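
The skeleton below sketches that two-network game in PyTorch on toy one-dimensional data rather than images; the architecture and hyperparameters are arbitrary illustrative choices:

```python
# GAN sketch: the generator fakes samples, the discriminator judges them,
# and each improves against the other.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: a Gaussian centered at 3.0
    fake = G(torch.randn(64, 8))

    # 1) train the discriminator to separate real from fake
    opt_d.zero_grad()
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) train the generator to fool the discriminator
    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(64, 1))      # the generator wants "real" verdicts
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(256, 8)).mean().item())        # should drift toward ~3.0 as G improves
```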

Advantages of AI

Automation

AI can automate workflows and processes, or work independently and autonomously of a human team. For example, AI can help automate aspects of information security by continuously monitoring and analyzing network traffic. Likewise, a smart factory may use dozens of different kinds of AI, such as robots that use computer vision to navigate the factory floor or inspect products for defects, the creation of digital twins, or real-time analytics that measure efficiency and output.

Reduce human error

AI can eliminate human error in data processing, analysis, manufacturing assembly, and other tasks through automated functions and algorithms that follow the same process every time.

Eliminate repetitive tasks

AI can be used to perform repetitive tasks, freeing up human resources to solve high-impact problems. AI can be used to automate processes such as verifying documents, transcribing phone calls, or answering simple customer questions like “What time do you close?” Robots are often used to perform “dull, dirty or dangerous” tasks in place of humans.

Fast and accurate

AI can process more information faster than humans can, finding patterns and discovering data relationships that humans might miss.

Unlimited availability

AI is not limited by time of day, rest needs, or other human burdens. When running in the cloud, AI and machine learning can be “always on” to continuously process assigned tasks.

Faster research and development

The ability to analyze large amounts of data quickly can shorten the time to R&D breakthroughs. For example, AI has been used in predictive modeling of potential new drug therapies and in efforts to quantify the human genome.
