Kerala Plus Two Computer Science Notes Chapter 11 Advances in Computing
A distributed computing system uses multiple computers to solve large-scale problems over the internet.
Distributed Computing Paradigms
Distributed computing ensures that computer systems are used effectively. It is a method of computer processing in which different parts of a program are run simultaneously on two or more computers that are communicating with each other over a network.
The term paradigm means a pattern or a model in the study of any subject of complexity. Advanced computing paradigms are essential to extend our ability to process information in many sectors of our society.
It is a method of computing in which large problems are divided into many small problems that are distributed to many computers. It provides a way to reduce the time it takes to perform a large task. Here, all the different processors have their own private, non-sharable memory, and information is exchanged between the processors solely by passing messages.
- Economical: Distributed computing reduces overall computing cost.
- Speed: By spreading the computational load across different nodes, each location is under less stress, as a result of which speed increases.
- Reliability: Distributed systems can continue to function even if one node ceases to function.
- Scalability: In distributed computing, the number of nodes can vary according to demand.
- Complexities: A lot of extra programming is required to set up a distributed system.
- Security: Information needs to be passed between the computers in the network. It can be intercepted and used for illegal purposes.
- Network reliance: A distributed system is connected through a network, and in case of network failure, the entire system may become unstable.
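The message-passing style described above can be sketched in a few lines of Python. This is only an illustration: the workers below keep their results in local variables (their "private memory") and exchange data only through queues, but they are threads on one machine, whereas in a real distributed system they would be separate computers communicating over a network.

```python
import threading
import queue

def worker(task_q, result_q):
    # Each worker's state lives only in its local variables
    # ("private memory"); data moves only as messages on queues.
    lo, hi = task_q.get()              # receive a sub-problem
    result_q.put(sum(range(lo, hi)))   # send the partial result back

task_q, result_q = queue.Queue(), queue.Queue()
n, parts = 1_000_000, 4

workers = [threading.Thread(target=worker, args=(task_q, result_q))
           for _ in range(parts)]
for w in workers:
    w.start()

# Divide one large problem (sum of 0..n-1) into `parts` small ones.
for i in range(parts):
    task_q.put((i * n // parts, (i + 1) * n // parts))

total = sum(result_q.get() for _ in range(parts))
for w in workers:
    w.join()

print(total)  # 499999500000 == sum(range(1000000))
```

Combining the partial results gives the same answer as the serial computation, which is the essence of dividing a large task among many nodes.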
Advanced computing paradigms include:
1. Parallel computing:
It is a form of computation in which many calculations are carried out simultaneously operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently. In parallel computing, all the different processors have access to shared memory. This memory can also be used to share information between different processors rather than explicitly sending messages.
Some of the fields in which parallel computing is applied are weather forecasting, nuclear sciences, aerospace engineering, etc.
In a parallel computing environment, even when one or more nodes fail, the whole system still works with reduced performance.
Each user can share the computing power and the storage resources in the system with other users.
Distributing several tasks to different nodes helps share the load across the whole system. This is called load sharing.
It is easily expandable and can scale to a large extent.
- Parallel applications are much more complex than corresponding serial applications.
- A program may run on one machine, but when ported to a different computer, significant changes must be made in order to allow the program to run properly.
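The shared-memory model that distinguishes parallel computing from message passing can be sketched as follows. This is a minimal illustration only: CPython threads do share memory, but because of the interpreter lock they demonstrate the model rather than a real parallel speed-up.

```python
import threading

data = list(range(8))                  # memory shared by every worker

def square_slice(lo, hi):
    # Workers read and write the same list directly --
    # no messages are exchanged between them.
    for i in range(lo, hi):
        data[i] = data[i] * data[i]

# Each worker handles its own slice of the shared array.
t1 = threading.Thread(target=square_slice, args=(0, 4))
t2 = threading.Thread(target=square_slice, args=(4, 8))
t1.start(); t2.start()
t1.join(); t2.join()

print(data)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Contrast this with the distributed sketch earlier: here the workers communicate implicitly through the shared array rather than by sending explicit messages.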
2. Grid computing:
It is described as a world in which computational power (resources, services, data) is readily available. Here users get access to computational power just like electricity through a wall socket, with no care or consideration for where or how the electricity is actually generated. Computers on a grid have a program on them that allows unused resources to be used by another computer on the grid.
The speed of the connections between the computers on the grid is relatively slow; therefore, processing tasks are broken up into independent parts and sent to different computers on the grid. When a computer completes its part, it sends the result back to the server. Services over grids can be of many types: knowledge grid, data grid, computational grid, etc.
Grid computing is used in disaster management, weather forecasting, market forecasting, bio-informatics, etc.
- Grids are capable of solving larger, more complex problems in a shorter time.
- Grid computing makes better use of existing hardware.
- Scalable: It is easy to increase computing power by adding desktops or servers.
- The interconnection between computers is slower and therefore affects processing speed.
- Licensing issues across different servers and computers may affect working with certain applications.
3. Cluster computing:
It is a form of computing in which a group of personal computers, storage devices, etc. are linked together so that they can work like a single computer. The components of a cluster are connected to each other through fast local area networks. Clusters provide computational power through parallel processing. It is a relatively low-cost form of a parallel processing machine used for scientific and other applications. Another reason for using clusters is to provide fault tolerance, i.e., to ensure that computational power is always available.
Clusters have evolved to support applications like e-commerce. Linux is the most widely used operating system for cluster computers.
- Price-performance ratio: Cluster computing significantly reduces the cost of processing power.
- Availability: If any one of the system components fails, the system as a whole stays highly available.
- Scalability: Processors and nodes can be added to a cluster whenever demand increases.
- Programmability issues: If there are differences in the software used in different computers, there may be issues while combining all of them as a single entity.
- Problem in finding fault: Since we deal with a single entity, we may have to face the difficulty in identifying the problematic component.
4. Cloud computing:
It is an emerging computing technology that uses the internet and central remote servers to maintain data and applications. It refers to the use of computing resources that reside on a remote machine and are delivered to the end user as a service over a network, e.g., an e-mail service.
Cloud computing is a computing model where resources such as computing power, storage, network and software are combined and provided as services on the internet in a remotely accessible fashion. To use the cloud computing environment, internet access and an account with a cloud service provider are required.
Cloud Service Models: Cloud providers offer services that can be grouped into three major services:
i. Software as a Service (SaaS): It gives subscribers access to both resources and applications. Here a complete application is offered to the customer as a service on demand. Consumers purchase the ability to access and use an application or service that is hosted in the cloud.
SaaS makes it unnecessary for us to have a copy of the software installed on our devices like desktops, laptops, mobiles, etc. SaaS also makes it easier to have the same version of software on all of your devices at once by accessing it on the cloud. In a SaaS agreement, customers have the least control over the cloud. On the customer's side, there is no need for high investment in servers or software licenses, while for the provider, costs are lowered since only a single application needs to be hosted and maintained.
SaaS is offered by companies such as Adobe, Microsoft, etc.
ii. Platform as a Service (PaaS):
A PaaS system goes a level above the Software as a Service setup. A PaaS provider gives subscribers access to the components that they require to develop and operate applications over the internet. The customer has the freedom to build his own applications, which run on the provider's infrastructure. To meet the manageability and scalability requirements of the applications, PaaS providers offer a predefined combination of OS and application servers, such as the LAMP platform, ASP.NET, PHP, Python, etc.
iii. Infrastructure as a Service (IaaS):
IaaS, as the name states, deals primarily with computational infrastructure. It provides basic storage and computing capabilities as standardized services over the network. The customer would typically set up his own software on the infrastructure, e.g., Amazon Web Services, Joyent, AT&T.
- Cost savings: Companies can reduce their capital expenditure and use operational expenditures for increasing their computing capabilities.
- Scalability/Flexibility: Companies can start with a small deployment and grow to a large deployment fairly rapidly, and then scale back if necessary. Also, the flexibility of cloud computing allows companies to use extra resources at peak times, enabling them to satisfy consumer demands.
- Reliability: Services using multiple redundant sites help in disaster recovery.
- Maintenance: Cloud service providers do the system maintenance requirements.
- Mobile accessible: Employees who travel as part of their job are also able to give better productivity since the systems they use are accessible from anywhere.
- Security and privacy: Whenever data or a program is sent over a publicly accessible communication system and data is stored on a shared disk system, there is a danger of the data being stolen or corrupted.
- Lack of standards: Clouds lack common standards, and thus it is unlikely that most clouds are interoperable.
Artificial Intelligence (AI)
AI is defined as developing computer programs to solve complex problems by application of processes that are similar to human reasoning processes. AI currently encompasses a huge variety of subfields, from general-purpose areas such as perception and logical reasoning to specific tasks such as playing chess, proving mathematical theorems, computer vision, natural language processing, medical diagnosis, etc.
Knowledge Pyramid: It is evident that the knowledge and intelligence that comes at the top of the pyramid are the major areas of study under AI.
Symbols: At the base of the pyramid are symbols which form the basic means of representation.
Data: It is a collection of mere symbols.
Information: By processing data we get information.
Knowledge: It is organised information.
Intelligence: The ability to draw useful inferences from the available knowledge is generally called intelligence.
Wisdom: It is the maturity of mind that directs its intelligence to achieve desirable goals.
Turing Test approach to AI
The Turing Test, proposed by Alan Turing, was designed to provide a satisfactory operational definition of intelligence. It is the ultimate test a machine must pass in order to be called intelligent, and for now, programming a computer to pass the test provides plenty to work on. In that case, the computer would need to possess the following capabilities:
- Natural Language Processing (NLP): To enable it to communicate successfully in English. Automatic speech recognition, speech synthesis, machine translation and handwritten character recognition are some of the practical applications associated with NLP.
- Knowledge representation: To incorporate human knowledge before or during the interrogation.
- Automated reasoning: To use the knowledge to answer questions and to draw new conclusions.
- Machine learning: To adapt to new circumstances and to detect and deduce patterns.
To pass the total Turing Test, the computer will also need the following:
Computer vision: The capability to observe objects. If a machine is to have the capability of vision, it must also perform activities including image acquisition, transformation, processing, analysis, understanding and interpretation.
Robotic activities: To make the robot a little smarter, intelligence must be imbibed in it. To cope with the changing environment, intelligent sensors must form part of the robot, which can sense the environment and supply the necessary signals to its intelligent control unit.
Computational Intelligence (CI)
It includes elements of learning, adaptation, evolution and fuzzy logic to create programs that are, in some sense, intelligent. Computational Intelligence experts focus on problems that are difficult to solve using artificial systems but are solved by humans and some animals using intelligence.
Cybernetics: It is defined as the study of control and communication between man and machine.
The enormous success achieved through the modeling of biologically inspired algorithms to simulate natural intelligence resulted in the development of intelligent systems. These intelligent algorithms form part of the field of AI. They include:
1. Artificial Neural Networks (ANN):
The ability to learn, memorise and still generalise prompted research into algorithmic modeling of biological neural systems, referred to as ANNs. In the brain, neurons are arranged in approximately 1000 main modules, each having about 500 neural networks. Current successes in neural modeling are for small ANNs aimed at solving a specific task.
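As a concrete illustration of a single artificial neuron (a standard textbook example, not taken from these notes), the following sketch trains a perceptron to compute the logical AND function:

```python
def train_perceptron(samples, epochs=20, rate=0.1):
    # One neuron: a weighted sum of inputs followed by a step function.
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out         # 0 when the prediction is right
            w0 += rate * err * x0      # nudge weights toward the target
            w1 += rate * err * x1
            b += rate * err
    return w0, w1, b

# Training data for logical AND: output is 1 only for input (1, 1).
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

print([predict(x0, x1) for (x0, x1), _ in AND])  # [0, 0, 0, 1]
```

After a few passes over the data, the learned weights reproduce the AND truth table, which is the "learning" referred to above on the smallest possible scale.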
2. Evolutionary Computation (EC):
The objective of EC is to mimic processes from natural evolution, where the main concept is survival of the fittest: the weak must die. In natural evolution, survival is achieved through reproduction. Individuals that inherit bad characteristics are weak and lose the battle to survive.
Evolutionary algorithms use a population of individuals, where an individual is referred to as a chromosome. A chromosome defines the characteristics of the individuals in the population. Each characteristic is referred to as a gene, and the value of a gene is called an allele. Evolutionary computation has been used successfully in real-world applications like data mining, fault diagnosis, classification, scheduling, etc.
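The terms above can be made concrete with a toy evolutionary algorithm (an illustrative sketch; the fitness function and parameters are assumptions): each chromosome is a bit string, each position is a gene, each bit value is an allele, and fitness is the number of 1-bits.

```python
import random

random.seed(0)                         # make the toy run repeatable

def fitness(chrom):
    return sum(chrom)                  # count of 1-alleles

def evolve(pop_size=20, genes=10, generations=40):
    # Random initial population of bit-string chromosomes.
    pop = [[random.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # the weak "die"
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)   # two surviving parents
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]            # crossover
            child[random.randrange(genes)] ^= 1  # mutate one gene
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Over the generations, survival of the fittest plus reproduction (crossover and mutation) drives the population toward chromosomes made almost entirely of 1-alleles.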
3. Swarm Intelligence (SI): It originated from the study of colonies or swarms of social organisms. Studies of the social behavior of organisms (individuals) in swarms prompted the design of very efficient algorithms.
4. Fuzzy Logic: Fuzzy sets and fuzzy logic allow what is referred to as approximate reasoning. With fuzzy sets, an element belongs to a set to a certain degree of certainty. Fuzzy logic allows reasoning with these uncertain facts to infer new facts, with a degree of certainty associated with each fact. In a sense, fuzzy sets and fuzzy logic allow the modeling of common sense.
Fuzzy systems have been applied successfully to control gear transmission and braking systems in vehicles, controlling lifts, home appliances, controlling traffic signals, etc.
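The idea of belonging to a set "to a certain degree" can be sketched with an assumed fuzzy set "hot" (the temperature thresholds here are illustrative, not from the notes):

```python
def hot_degree(temp_c):
    # Degree (0..1) to which temp_c belongs to the fuzzy set "hot".
    # The 25 and 40 degree thresholds are assumed for illustration.
    if temp_c <= 25:
        return 0.0
    if temp_c >= 40:
        return 1.0
    return (temp_c - 25) / 15          # linear ramp in between

def fuzzy_and(a, b):                   # standard min-conjunction
    return min(a, b)

def fuzzy_not(a):
    return 1 - a

print(hot_degree(25))    # 0.0 -- not hot at all
print(hot_degree(32.5))  # 0.5 -- "hot" to degree one half
print(hot_degree(40))    # 1.0 -- fully hot
print(fuzzy_and(hot_degree(32.5), fuzzy_not(hot_degree(32.5))))  # 0.5
```

Note the last line: unlike classical logic, "hot AND not hot" is not forced to be 0, which is exactly the approximate reasoning described above.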
Applications of Computational Intelligence
Main applications are:
1. Biometrics: It refers to metrics (measurements) related to human characteristics and traits. Biometric authentication is used in the identification of an individual. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals.
Biometric identification is popularly used in attendance management systems, authentication in computers and other devices, Aadhaar cards, etc.
2. Robotics: Robotics can be defined as the scientific study associated with the design, fabrication, theory and application of robots. Robotics is a branch that spans mechanical engineering, electrical engineering and computer science. It includes the design, construction, operation and application of robots, as well as computer systems for their control, sensory feedback and information processing.
Some of the applications of robots:
- Uses in vehicle manufacturing industry: Robotics arms are used in the vehicle manufacturing process.
- Exploration of outer space: Manipulative arms that are controlled by humans are used to unload the docking bay of space shuttles, to launch satellites or to construct a space station.
- In intelligent homes: Automated systems can now monitor home security, environmental conditions and energy usage.
- Exploration in difficult environments: Robots can visit environments that are harmful to humans.
- Uses in the military: Nowadays airborne robots (drones) are used by modern armies. Today drones are mostly used for surveillance purpose.
- Uses in agriculture: In developed countries, large farms use automated harvesters that can cut and gather crops. Robotic dairies allow operators to feed and milk animals remotely.
3. Computer vision: It is the construction of explicit, meaningful descriptions of the structure and properties of the 3-dimensional world from 2-dimensional images. Computer vision acquires the 3-dimensional shape and other properties of objects based on their 2-dimensional (projection) images through the use of computers and cameras. It is also called image understanding. The image data can take many forms, such as video sequences, views from multiple cameras or multi-dimensional data from a medical scanner.
Computer vision was initially developed for military applications. It is an important component of artificial intelligence and robotics.
4. Natural Language Processing (NLP):
It is a subfield of Artificial Intelligence (AI). It allows people to interact with computers without any specialised knowledge: anybody can simply talk to the computer in their own language, and there is no need to learn any programming language. An NLP computer does two things:
Natural Language Understanding (NLU): It is about understanding and reasoning the input, which is any natural language like English, Malayalam, etc.
Natural Language Generation (NLG): It deals with creation of output.
5. Automatic Speech Recognition (ASR): It refers to artificial intelligence methods of communicating with a computer in a natural language like Malayalam. ASR is one of the fastest growing and commercially most promising applications of natural language processing technology. This can be accomplished by developing an ASR system that allows a computer to identify the words a person speaks into a microphone or telephone and convert them into written text. As a result, it has the potential of being an important mode of interaction between humans and computers.
The ASR system would support many valuable applications like dictation, command and control, voice dialing, spoken database querying, office dictation devices, automatic voice translation into foreign languages, etc.
6. Optical Character Recognition (OCR) and Handwritten Character Recognition (HCR) Systems: OCR is software that converts scanned images of printed text (numerals, letters or symbols) into a computer-processable format (such as ASCII). At present, reasonably good OCR packages are available for most languages.
According to the way in which handwriting data is generated, two different approaches are present in HCR: online and offline. In the former, the data are captured during the writing process by a special pen on an electronic surface. In the latter, the data are acquired by a scanner after the writing process is over.
7. Bioinformatics: It is the application of computer technology to the management of biological information. Computers are used to gather, store, analyze and integrate biological and genetic information which can then be applied to gene-based data discovery and development. The need for bioinformatics capabilities has been accelerated by the explosion of publicly available genomic information resulting from the Human Genome Project. The aims of bioinformatics are three-fold.
- To organize data in a way that allows researchers to access existing information and to submit new entries as they are produced.
- To develop tools and resources that aid in the analysis of data.
- To use these tools to analyse the data and interpret the results in a biologically meaningful manner.
8. Geographic Information System (GIS):
It developed from digital cartography and Computer Aided Design (CAD) database management systems. GIS is a computer system for capturing, storing, checking and displaying data related to various positions on the earth's surface. GIS can show many different kinds of data on a map. This enables people to easily see, analyse and understand patterns and relationships.
GIS can be applied in various areas like soil mapping, agricultural mapping, forest mapping, e-Governance, water resource management, natural disaster assessment, etc. It is also used in strategic urban planning, infrastructure planning, precision agriculture planning, etc.