Seminar Report (ECE)

Title Seminar Report (ECE)
Author Kanika Chib
Course Optics & Waves
Institution Kurukshetra University
Pages 22

Summary

This document is based on a Seminar report related to Artificial Intelligence...


Description

Seminar Report on

ARTIFICIAL INTELLIGENCE

Submitted in partial fulfillment of the requirement for the award of the degree of Bachelor of Technology in Electronics & Communication Engg.

Submitted By: Palak 1215024, Batch: 2015-2019 (JMIT)

Department of Electronics & Communication Engg., Seth Jai Parkash Mukand Lal Institute of Engg. & Technology, Radaur – 135133 (Yamuna Nagar) (Affiliated to Kurukshetra University, Kurukshetra, Haryana, India)

ACKNOWLEDGEMENT

I would like to express my greatest gratitude to the people who have helped and supported me throughout my project. I am grateful to my mentor Mr. Vishal Chaudhary (Head of Department) for his continuous support for the project, from initial advice and encouragement to this day, under whose supervision I completed my project. I wish to thank my parents for their undivided support and interest, who inspired and encouraged me to go my own way, without whom I would have been unable to complete my project. Last but not least, I want to thank my friends, who appreciated my work and motivated me, and finally God, who made all things possible. This project report could not have been completed without the references found through the search engine Google, which helped make the report accurate.

Palak Roll No.1215024

TABLE OF CONTENTS

1. ABSTRACT
2. INTRODUCTION
3. HISTORY
4. ADVANTAGES & DISADVANTAGES
5. BASIC COMPONENTS
6. PROGRAMMING LANGUAGES
7. MYTHS ABOUT AI
8. ARCHITECTURE
9. APPLICATIONS
10. CATEGORIES
11. SCOPE
12. CONCLUSION
13. REFERENCES

ABSTRACT

Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think. AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems. Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. According to textbooks, artificial intelligence is "the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success". The use of artificial intelligence methods is becoming increasingly common in the modeling and forecasting of hydrological and water-resource processes. AI is the field of scientific inquiry concerned with designing mechanical systems that can simulate human mental processes. The field draws upon theoretical constructs from a wide variety of disciplines, including mathematics, psychology, linguistics, neurophysiology, computer science, and electronic engineering.

INTRODUCTION

What does Artificial Intelligence mean? Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include:

- Speech recognition
- Learning
- Planning
- Problem solving

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry. Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:

- Knowledge
- Reasoning
- Problem solving
- Perception
- Learning
- Planning
- Ability to manipulate and move objects

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task. Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions. Classification determines the category an object belongs to and regression deals with obtaining a set of numerical input or output examples, thereby discovering functions enabling the generation of suitable outputs from respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory. Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition. Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.
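The distinction drawn above between classification (determining the category an object belongs to) and regression can be made concrete with a small sketch. The following is an illustrative example only, not something from the report: the simplest possible classifier, 1-nearest-neighbour, written in plain Python, with invented toy data.

```python
from math import dist

def nn_classify(samples, query):
    """Classify `query` by copying the label of the closest training sample.

    `samples` is a list of (feature_vector, label) pairs. This is
    1-nearest-neighbour classification: the category of a new object is
    the category of the most similar example already seen.
    """
    _, label = min(samples, key=lambda pair: dist(pair[0], query))
    return label

# Toy data: points near (0, 0) are "small", points near (10, 10) are "large".
training = [((0, 1), "small"), ((1, 0), "small"),
            ((9, 10), "large"), ((10, 9), "large")]

print(nn_classify(training, (2, 1)))   # -> small
print(nn_classify(training, (8, 8)))   # -> large
```

Real systems replace the raw Euclidean distance and the handful of samples with learned feature representations and large data sets, but the principle, predicting by similarity to known examples, is the same.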

HISTORY

The term Artificial Intelligence was coined by John McCarthy in 1956, who defined it as "the science and engineering of making intelligent machines". The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial Intelligence (AI) is the key technology in many of today's novel applications, ranging from banking systems that detect attempted credit card fraud, to telephone systems that understand speech, to software systems that notice when you're having problems and offer appropriate advice. These technologies would not exist today without the sustained federal support of fundamental AI research over the past three decades. In the field of information technology, AI is focused on creating machines that can engage in behaviors that humans consider intelligent. The possibility of intelligent machines has aroused human curiosity since ancient times, and today, with the advent of the computer and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess players, and provide countless capabilities not possible before.

ADVANTAGES & DISADVANTAGES

Advantages of AI

- Smarter artificial intelligence may replace humans in some jobs, freeing people for other pursuits by automating manufacturing and transportation.
- Self-modifying, self-writing and learning software can relieve programmers of the burdensome task of specifying the functions of different programs.
- Artificial intelligence can be used as cheap labour, thus increasing profits for corporations.
- Artificial intelligence can make deployment easier and less resource-intensive.
- Compared to traditional programming techniques, expert-system approaches provide added flexibility (and hence easier modifiability) through the ability to model rules as data rather than as code. In situations where an organization's IT department is overwhelmed by a software-development backlog, rule engines, by facilitating turnaround, provide a means that can allow organizations to adapt more readily to changing needs.
- In practice, modern expert-system technology is employed as an adjunct to traditional programming techniques, and this hybrid approach allows the strengths of both to be combined. Thus, rule engines allow control through programs (and user interfaces) written in a traditional language, and also incorporate necessary functionality such as interoperability with existing database technology.

Disadvantages of AI

- Rapid advances in AI could lead to massive structural unemployment.
- New features can have unpredictable and unforeseen impacts.
- An expert system or rule-based approach is not optimal for all problems, and considerable knowledge is required so as not to misapply the systems.
- Ease of rule creation and rule modification can be double-edged: a system can be sabotaged by a non-knowledgeable user who can easily add worthless rules or rules that conflict with existing ones. Reasons for the failure of many systems include the absence of (or neglect to employ diligently) facilities for system audit, detection of possible conflicts, and rule lifecycle management (e.g. version control, or thorough testing before deployment). The problems to be addressed here are as much organizational as technological.
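The expert-system idea of modelling rules as data rather than as code can be sketched in a few lines. The engine below is a hypothetical minimal forward-chainer written for illustration, not any particular product; the facts and rules are invented examples.

```python
# "Rules as data": each rule is a plain tuple of (required facts, fact to
# conclude), so behaviour is changed by editing the rule list, not the program.
RULES = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat"}, "is_mammal"),
    ({"is_mammal"}, "is_animal"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion, until no rule adds anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fur", "says_meow"}, RULES))
```

This also illustrates the double-edged nature noted above: anyone who can edit RULES can extend the system without touching the engine, which is exactly how a careless user could introduce worthless or conflicting rules.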

BASIC COMPONENTS

Many of AI's revolutionary technologies are common buzzwords, like "natural language processing," "deep learning," and "predictive analytics": cutting-edge technologies that enable computer systems to understand the meaning of human language, learn from experience, and make predictions, respectively. Understanding AI jargon is the key to facilitating discussion about the real-world applications of this technology. These technologies are disruptive, revolutionizing the way humans interact with data and make decisions, and should be understood in basic terms by all of us.

Fig 5.1 Basic Components

Machine learning, or ML, is an application of AI that provides computer systems with the ability to automatically learn and improve from experience without being explicitly programmed. ML focuses on the development of algorithms that can analyze data and make predictions. Beyond being used to predict which Netflix movies you might like, or the best route for your Uber, machine learning is being applied in the healthcare, pharma, and life sciences industries to aid disease diagnosis and medical image interpretation, and to accelerate drug development.

Deep learning is a subset of machine learning that employs artificial neural networks that learn by processing data. Artificial neural networks mimic the biological neural networks in the human brain. Multiple layers of artificial neural networks work together to determine a single output from many inputs, for example, identifying the image of a face from a mosaic of tiles. The machines learn through positive and negative reinforcement of the tasks they carry out, which requires constant processing and reinforcement to progress.

Neural networks enable deep learning. As mentioned, neural networks are computer systems modeled after neural connections in the human brain. The artificial equivalent of a human neuron is a perceptron. Just as bundles of neurons create neural networks in the brain, stacks of perceptrons create artificial neural networks in computer systems. Neural networks learn by processing training examples. The best examples come in the form of large data sets, like, say, a set of 1,000 cat photos. By processing the many images (inputs), the machine is able to produce a single output, answering the question, "Is the image a cat or not?"

Cognitive computing is another essential component of AI. Its purpose is to imitate and improve interaction between humans and machines. Cognitive computing seeks to recreate the human thought process in a computer model, in this case by understanding human language and the meaning of images. Together, cognitive computing and artificial intelligence strive to endow machines with human-like behaviors and information-processing abilities.

Natural Language Processing, or NLP, allows computers to interpret, recognize, and produce human language and speech. The ultimate goal of NLP is to enable seamless interaction with the machines we use every day by teaching systems to understand human language in context and produce logical responses. A real-world example of NLP is Skype Translator, which interprets the speech of multiple languages in real time to facilitate communication.

Computer vision is a technique that implements deep learning and pattern identification to interpret the content of an image, including the graphs, tables, and pictures within PDF documents, as well as other text and video. Computer vision is an integral field of AI, enabling computers to identify, process and interpret visual data. Applications of this technology have already begun to revolutionize industries such as research & development and healthcare, where computer vision and machine learning are used to evaluate patients' X-ray scans and diagnose them faster.
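The perceptron described above can be written down directly. The sketch below is illustrative only: a single artificial neuron trained with the classic error-correction rule; the learning rate, epoch count and AND-gate data are arbitrary choices for demonstration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron (the artificial counterpart of a neuron).

    The error-correction rule: whenever the prediction is wrong, nudge each
    weight and the bias toward the correct answer.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - predicted
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND is linearly separable, so one perceptron suffices to learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Deep learning stacks many layers of such units and replaces this simple update rule with gradient descent, but the idea of learning by repeatedly correcting errors on training examples is the same.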

PROGRAMMING LANGUAGES

Companies are adopting artificial intelligence to improve how users explore and communicate. Artificial intelligence is a branch of engineering which basically aims to make computers that can think intelligently, in a manner similar to how intelligent humans think. Here are the top languages most commonly used for building AI projects:

1. PYTHON

Fig 6.1 Python

Python is considered first in the list of AI development languages due to its simplicity. Python's syntax is very simple and easily learnt, so many AI algorithms can be easily implemented in it. Python takes a short development time in comparison to other languages like Java, C++ or Ruby. Python supports object-oriented, functional, as well as procedure-oriented styles of programming. There are plenty of libraries in Python which make our tasks easier. For example, Numpy is a library for Python that helps us solve many scientific computations, and Pybrain lets us use machine learning in Python.

2. R

Fig 6.2 R

R is one of the most effective languages and environments for analyzing and manipulating data for statistical purposes. Using R, we can easily produce well-designed, publication-quality plots, including mathematical symbols and formulae where needed. Apart from being a general-purpose language, R has numerous packages like RODBC, Gmodels, Class and Tm which are used in the field of machine learning. These packages make the implementation of machine learning algorithms easy.
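To illustrate how concisely a basic learning algorithm can be written in Python, here is a minimal ordinary-least-squares line fit using only the standard library (no Numpy); the data points are invented for demonstration.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept,
    using the closed-form formulas for simple linear regression."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying exactly on y = 2x + 1, so the fit should recover those values.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # -> 2.0 1.0
```

The same fit in a lower-level language would need noticeably more scaffolding, which is the point being made about Python's suitability for prototyping AI algorithms.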

3. Lisp

Fig 6.3 Lisp

Lisp is one of the oldest and most suited languages for development in AI. It was invented in 1958 by John McCarthy, the father of Artificial Intelligence. It has the capability of processing symbolic information effectively. It is also known for its excellent prototyping capabilities and easy dynamic creation of new objects, with automatic garbage collection. Its development cycle allows interactive evaluation of expressions and recompilation of functions or files while the program is still running. Over the years, many of these features have migrated into many other languages, thereby eroding the uniqueness of Lisp.

4. Prolog

This language stays alongside Lisp when we talk about development in the AI field. The features provided by it include efficient pattern matching, tree-based data structuring and automatic backtracking. All these features provide a surprisingly powerful and flexible programming framework. Prolog is widely used for working on medical projects and also for designing expert AI systems.

5. Java

Fig 6.5 Java

Java can also be considered a good choice for AI development. Artificial intelligence has a lot to do with search algorithms, artificial neural networks and genetic programming. Java provides many benefits: ease of use, ease of debugging, package services, simplified work with large-scale projects, graphical representation of data and better user interaction. It also incorporates Swing and SWT (the Standard Widget Toolkit), tools that make graphics and interfaces look appealing and sophisticated.

MYTHS ABOUT AI

HOW CAN AI BE DANGEROUS? Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control of such a situation. This risk is one that's present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.

2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

As these examples illustrate, the concern about advanced AI isn't malevolence but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we have a problem. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green-energy project and there's an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

WHY THE RECENT INTEREST IN AI SAFETY

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines? The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones which experts viewed as decades away merely five years ago have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can't use past technological developments as much of a basis because we've never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we're the strongest, fastest or biggest, but because we're the smartest. If we're no longer the smartest, are we assured to remain in control? FLI's position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, FLI's ...

