Artificial intelligence (AI) is a branch of computer science concerned with creating computer-controlled robots or software that think in ways similar to intelligent humans. The aim is to make such systems more capable, more efficient, and more human-like.
The first generation of AI scientists and visionaries believed that we would eventually be able to create human-level intelligence. Their approach was to study how the human brain thinks and how humans learn, decide, and work to solve problems, and then use the results of these studies as a basis for developing intelligent software systems. Several decades of AI research, however, have shown that it is extremely difficult to replicate human problem-solving in its full complexity.
On the one hand, people are very good at generalising knowledge: they can apply concepts learned in one area to another, something machines still struggle to do.
On the other hand, people can make relatively reliable decisions on the basis of intuition and little or no information, even under great uncertainty.
The ideal characteristic of artificial intelligence is the ability to take actions that have the best chance of achieving a particular goal. When most people hear the term "artificial intelligence," the first thing they usually think of is robots: countless films and novels weave stories of human-like machines wreaking havoc on Earth.
Artificial intelligence is a science and technology grounded in research into human intelligence. Its main driver is the development of computer functions associated with human intelligence, such as reasoning, learning, and problem solving.
One or more of these capabilities, including reasoning, learning, problem solving, and understanding, can contribute to building an intelligent system.
At a very high level, artificial intelligence can be divided into two categories: narrow AI and broad AI. Narrow AI is the kind of intelligent system we see everywhere in computing today: a system trained to perform a specific task, learning from data rather than being explicitly programmed for every case. Because such a system is trained (or learns) to do one particular task and nothing else, we call it narrow AI.
Broad AI, by contrast, is the kind of AI we see more often in movies; it does not exist today, and AI experts disagree on how quickly it will become a reality. And if algorithm-driven artificial intelligence continues to spread, will people be better off than they are today?
These questions were put to application experts in a series of interviews conducted by the US National Institute of Standards and Technology (NIST) and the University of Michigan in the summer of 2018.
These experts predict that networked artificial intelligence will enhance human effectiveness but will also threaten human autonomy, agency, and capabilities. AI and machine learning are often embedded in applications, providing users with features such as automation and prediction. The experts also address the wide-ranging possibilities of a computer that can match or surpass the capabilities of human intelligence.
Intelligent applications make it easier for companies and employees to carry out processes and tasks. Developers have a range of tools at their disposal for building intelligent applications, whether through ready-made machine-learning and speech-recognition services or by embedding machine- and deep-learning functions directly in software. When evaluating these tools, it is important to distinguish which are AI-capable, since that determines how they can help in developing an intelligent application. Often, developer tools are advanced enough to use AI platforms to help build completely new applications from scratch.
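To make the idea of an "intelligent application" concrete, here is a minimal sketch of a learned feature inside an ordinary program: a support-ticket router that learns topic vocabularies from labeled examples instead of being explicitly programmed with routing rules. The ticket data, labels, and word-overlap scoring are illustrative assumptions, not any real product's API; a production system would use a proper machine-learning library.

```python
from collections import Counter

def featurize(text):
    """Bag-of-words feature counts for a short text."""
    return Counter(text.lower().split())

def train(labeled_texts):
    """Accumulate word counts per label from (text, label) pairs."""
    profiles = {}
    for text, label in labeled_texts:
        profiles.setdefault(label, Counter()).update(featurize(text))
    return profiles

def predict(profiles, text):
    """Score each label by word overlap with its profile; return the best."""
    words = featurize(text)
    def score(label):
        return sum(min(words[w], profiles[label][w]) for w in words)
    return max(profiles, key=score)

# Toy training data (invented): routing support tickets by topic.
tickets = [
    ("cannot log in to my account", "auth"),
    ("password reset link not working", "auth"),
    ("invoice shows the wrong amount", "billing"),
    ("charged twice on my card", "billing"),
]
profiles = train(tickets)
print(predict(profiles, "forgot my password"))  # prints "auth"
```

The point of the sketch is the division of labour: the application code stays simple, while the behaviour comes from data, which is what distinguishes an AI-capable feature from a hand-coded rule.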
Automation tools combined with AI technology can expand the volume and variety of tasks we perform and increase productivity.
Combined with machine learning and emerging AI tools, robotic process automation (RPA) can automate a large portion of corporate work by enabling its tactical bots to share intelligence with AI systems and respond to process changes. Deep learning, a newer form of machine learning, extends this into automation and predictive analytics. One example is embedded software that automates rule-based data processing, data analysis, and other tasks traditionally performed by humans.
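The rule-based data processing mentioned above can be sketched in a few lines. This is a toy illustration, not a real RPA product's API: the record fields, rule names, and thresholds are invented for the example. Clean records pass straight through, while rule violations are flagged as exceptions for a human, which is the typical division of work when a bot automates a back-office process.

```python
# Illustrative rules an RPA-style bot might apply to invoice records.
# Field names and the 10,000 limit are assumptions for this sketch.
RULES = [
    ("missing_amount", lambda r: r.get("amount") is None),
    ("over_limit",     lambda r: (r.get("amount") or 0) > 10_000),
    ("no_approver",    lambda r: not r.get("approver")),
]

def process(records):
    """Split records into auto-approved IDs and flagged exceptions."""
    approved, exceptions = [], []
    for record in records:
        hits = [name for name, rule in RULES if rule(record)]
        if hits:
            exceptions.append((record["id"], hits))  # route to a human
        else:
            approved.append(record["id"])            # straight-through
    return approved, exceptions

invoices = [
    {"id": "inv-1", "amount": 420.0, "approver": "kim"},
    {"id": "inv-2", "amount": None,  "approver": "kim"},
    {"id": "inv-3", "amount": 25000, "approver": ""},
]
approved, exceptions = process(invoices)
print(approved)    # ['inv-1']
print(exceptions)  # [('inv-2', ['missing_amount']), ('inv-3', ['over_limit', 'no_approver'])]
```

Adding machine learning to such a pipeline would mean learning the rules, or the exception-routing decision, from historical data rather than writing them by hand.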
The use of artificial intelligence also raises ethical questions. AI tools offer companies a range of new capabilities, but AI systems, for better or worse, reinforce what they have already learned. This is problematic because the machine-learning algorithms underlying many of the most advanced AI tools are only as unbiased as the data they are trained on. The potential for bias is inherent and must be monitored, regardless of what data is used to train the AI program.
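A tiny numerical example shows how a model reinforces what it has learned. The historical decision data below is entirely invented; the "model" simply learns each group's past approval rate, which is what any naive predictor trained to mimic history will converge toward. If the history was biased, the learned scores are biased too.

```python
from collections import defaultdict

# Invented biased history: group_a was approved 80% of the time,
# group_b only 40% of the time, for otherwise similar applicants.
history = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20 +
    [("group_b", True)] * 40 + [("group_b", False)] * 60
)

def fit_approval_rates(data):
    """Learn per-group approval rates from labeled historical decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in data:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: a / n for g, (a, n) in counts.items()}

rates = fit_approval_rates(history)
print(rates)  # {'group_a': 0.8, 'group_b': 0.4}
# Identical new applicants would be scored differently purely
# because of the bias baked into the training data.
```

Monitoring for this kind of disparity, by comparing outcome rates across groups as done above, is one concrete form the bias auditing mentioned in the text can take.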
Anyone who wants to use machine learning in the real world must incorporate ethics into their AI work and strive to avoid bias.