hidup dgn kesabaran
Wednesday 8 August 2012
Classification Of Computer
by KISHORE on FEBRUARY 7, 2010
Until recently computers were classified as microcomputers, minicomputers, mainframes, and supercomputers. Technology, however, has changed and this classification is no longer relevant. Today all computers use microprocessors as their CPU. Thus classification is possible only through their mode of use. Based on mode of use we can classify computers as Palms, Laptop PCs, Desktop PCs and Workstations. Based on interconnection we can classify them as distributed computers and parallel computers.
Palm PCs or Simputer
With miniaturization and high-density packing of transistors on a chip, computers with capabilities nearly that of PCs, which can be held in a palm, have emerged. Palm PCs accept handwritten input using an electronic pen, which can be used to write on the Palm's screen (besides a tiny keyboard); they have small disk storage and can be connected to a wireless network. One has to train the system on the user's handwriting before it can be used as a mobile phone, fax, and e-mail machine. A version of the Microsoft operating system called Windows CE is available for the Palm. An Indian initiative to meet the needs of the rural population of developing countries is called the Simputer. The Simputer is a mobile handheld computer with input through icons on a touch-sensitive overlay on the LCD display panel. A unique feature of the Simputer is its use of the free open source OS GNU/Linux. The cost of ownership is thus low, as there is no software cost for the OS. Another unique feature of the Simputer, not found in the Palm, is a smart card reader/writer, which increases its functionality, including the possibility of personalizing a single Simputer for several users.
Laptop PCs:
Laptop PCs (also known as notebook computers) are portable computers weighing around 2 kg. They have a keyboard, a flat liquid crystal display, and a Pentium or PowerPC processor. Colour displays are available. They normally run the Windows OS. Laptops come with a hard disk (around 40 GB), a CD-ROM drive and a floppy disk drive. They must run on batteries and are thus designed to conserve energy by using power-efficient chips. Many laptops can be connected to a network. There is a trend towards providing wireless connectivity to laptops so that they can read files from large stationary computers. The most common uses of laptop computers are word processing and spreadsheet computing. Laptops use miniature components, which must consume low power and be packaged in small volumes.
Personal Computers (PCs)
The most popular PCs are desktop machines. Early PCs had Intel 8088 microprocessors as their CPU. Currently (2004), the Intel Pentium 4 is the most popular processor. The machines made by IBM are called IBM PCs. Other manufacturers use IBM's specifications and design their own PCs; these are known as IBM-compatible PCs. IBM PCs mostly use MS Windows, Windows XP or GNU/Linux as the operating system. IBM PCs nowadays (2004) have 64 to 256 MB of main memory, 40 to 80 GB of hard disk and a floppy disk or flash ROM. Besides these, a 650 MB CD-ROM drive is also provided in PCs intended for multimedia use. Another company, Apple, also makes PCs. Apple PCs are known as the Apple Macintosh. They use Apple's proprietary OS, which is designed for simplicity of use. Apple Macintosh machines used Motorola 68030 microprocessors but now use the PowerPC 603 processor. IBM PCs are today the most popular computers, with millions of them in use throughout the world.
Workstations:
Workstations are also desktop machines. They are, however, more powerful, providing processor speeds about 10 times that of PCs. Most workstations have a large colour video display unit (19-inch monitors). Normally they have main memory of around 256 MB to 4 GB and a hard disk of 80 to 320 GB. Workstations normally use RISC processors such as MIPS (SGI), RIOS (IBM), SPARC (SUN), or PA-RISC (HP). Some manufacturers of workstations are Silicon Graphics (SGI), IBM, SUN Microsystems and Hewlett-Packard (HP). The standard operating system of workstations is UNIX and its derivatives such as AIX (IBM), Solaris (SUN), and HP-UX (HP). Very good graphics facilities and large video screens are provided by most workstations. A system called X Windows is provided on workstations to display the status of multiple processes during their execution. Most workstations have built-in hardware to connect to a Local Area Network (LAN). Workstations are used for executing numeric- and graphics-intensive applications such as those which arise in Computer Aided Design, simulation of complex systems, and visualization of simulation results.
Servers
While manufacturers such as IBM, SUN and Silicon Graphics have been manufacturing high performance workstations, the speed of Intel Pentium processors has been going up. In 2004, Pentiums with a clock speed of 3 GHz are available. They can support several GB of main memory. Thus the difference between high-end PCs and workstations is vanishing, and today companies such as SUN make Intel-based workstations. While workstations are characterized by high performance processors with large screens for interactive programming, servers are used for specific purposes such as high performance numerical computing (on so-called compute servers), web page hosting, database storage and printing; large interactive screens are not necessary. Compute servers have high performance processors with large main memory, database servers have big on-line disk storage (100s of GB), and print servers support several high speed printers.
Mainframe Computers
Organizations such as banks and insurance companies process large numbers of transactions on-line. They require computers with very large disks to store several terabytes of data and to transfer data from disk to main memory at several hundred megabytes per second. These computers are much bigger and faster than workstations and several hundred times more expensive. They normally use proprietary operating systems, which usually provide services such as user accounting, file security and control. They are normally much more reliable than the operating systems on PCs. These types of computers are called mainframes. There are a few manufacturers of mainframes (e.g., IBM and Hitachi). The number of mainframe users has reduced as many organizations rewrite their systems to use networks of powerful workstations.
Supercomputers
Supercomputers are the fastest computers available at any given time and are normally used to solve problems which require intensive numerical computations. Examples of such problems are numerical weather prediction, design of supersonic aircraft, design of drugs and modeling of complex molecules. All of these problems require around 10^16 calculations to be performed. These problems would be solved in about 3 hours by a computer which can carry out a trillion (10^12) floating point calculations per second. Such a computer is classified as a supercomputer today (2004). By about the year 2006, computers which can carry out 10^15 floating point operations per second on 64-bit floating point numbers would be available, and these would be the ones called supercomputers. Supercomputers are built by interconnecting several high speed computers and programming them to work cooperatively to solve problems. Recently, applications of supercomputers have expanded beyond scientific computing; they are now used to analyze large commercial databases, produce animated movies and play games such as chess. Besides arithmetic speed, a computer, to be classified as a supercomputer, should have a large main memory of around 16 GB and a secondary memory of 1000 GB. The speed of transfer of data from secondary memory to main memory should be at least a tenth of the memory-to-CPU data transfer speed. All supercomputers use parallelism to achieve their speed. In Sec. 12.9 we discuss the organization of parallel computers.
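The timing claim above is easy to verify with back-of-the-envelope arithmetic; here is a quick sketch using the figures quoted in the text:

```python
# Rough check of the supercomputer timing claim:
# ~10^16 floating point operations at 10^12 FLOPS (one teraflop).
operations = 1e16
flops = 1e12            # one trillion floating point operations per second

seconds = operations / flops   # 10,000 seconds
hours = seconds / 3600
print(f"{hours:.1f} hours")    # roughly 2.8 hours, i.e. "about 3 hours"
```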
computer history and generations
First Generation - 1940-1956: Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. A magnetic drum, also referred to simply as a drum, is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored. Magnetic drums were once used as a primary storage device but have since been relegated to auxiliary storage.
The tracks on a magnetic drum are assigned to channels located around the circumference of the drum, forming adjacent circular bands that wind around the drum. A single drum can have up to 200 tracks. As the drum rotates at a speed of up to 3,000 rpm, the device's read/write heads deposit magnetized spots on the drum during the write operation and sense these spots during a read operation. This action is similar to that of a magnetic tape or disk drive.
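From the rotation speed quoted above one can estimate the drum's average rotational latency, i.e. how long the read/write heads wait, on average, for the desired spot to come around. A quick sketch:

```python
# Average rotational latency of a drum spinning at 3,000 rpm.
# On average, the desired spot is half a revolution away from the head.
rpm = 3000
revs_per_second = rpm / 60                    # 50 revolutions per second
ms_per_revolution = 1000 / revs_per_second    # 20 ms per full turn
avg_latency_ms = ms_per_revolution / 2        # 10 ms average wait
print(avg_latency_ms)   # 10.0
```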
They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Machine languages are the only languages understood by computers. While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. Computer programmers, therefore, use either high level programming languages or an assembly language. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.
Programs written in high level programming languages are translated into assembly language or machine language by a compiler. Assembly language programs are translated into machine language by a program called an assembler (assembly language compiler).
Every CPU has its own unique machine language. Programs must be rewritten or recompiled, therefore, to run on different types of computers. Input was based on punched cards and paper tape, and output was displayed on printouts.
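The assembly-to-machine translation described above can be illustrated with a toy assembler. The three-instruction "architecture" here is invented for the example and does not correspond to any real CPU's instruction set:

```python
# A toy illustration of the assembler step: mnemonic instructions with
# named operations are mapped to purely numeric machine code.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3}   # hypothetical opcodes

def assemble(program):
    """Translate (mnemonic, operand) pairs into numeric machine code."""
    machine_code = []
    for mnemonic, operand in program:
        machine_code.append((OPCODES[mnemonic], operand))
    return machine_code

source = [("LOAD", 10), ("ADD", 11), ("STORE", 12)]
print(assemble(source))   # [(1, 10), (2, 11), (3, 12)]
```

The machine-code output is all numbers, which is what a real CPU consumes; the mnemonic form exists only for human convenience.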
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau, in 1951.
ENIAC is an acronym for Electronic Numerical Integrator And Computer, the world's first operational electronic digital computer, developed by Army Ordnance to compute World War II ballistic firing tables. The ENIAC, weighing 30 tons, using 200 kilowatts of electric power and consisting of 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors, was completed in 1945. In addition to ballistics, the ENIAC's fields of application included weather prediction, atomic-energy calculations, cosmic-ray studies, thermal ignition, random-number studies, wind-tunnel design, and other scientific uses. The ENIAC soon became obsolete as the need arose for faster computing speeds.
Second Generation - 1956-1963: Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. A transistor is a device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including computers. Today's latest microprocessors contain tens of millions of microscopic transistors.
Prior to the invention of transistors, digital circuits were composed of vacuum tubes, which had many disadvantages. They were much larger, required more energy, dissipated more heat, and were more prone to failures. It's safe to say that without the invention of transistors, computing as we know it today would not be possible.
The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.
Third Generation - 1964-1971: Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
Silicon is a nonmetallic chemical element in the carbon family of elements. Silicon - atomic symbol "Si" - is the second most abundant element in the earth's crust, surpassed only by oxygen. Silicon does not occur uncombined in nature. Sand and almost all rocks contain silicon combined with oxygen, forming silica. When silicon combines with other elements, such as iron, aluminum or potassium, a silicate is formed. Compounds of silicon also occur in the atmosphere, natural waters, many plants and in the bodies of some animals.
Silicon is the basic material used to make computer chips, transistors, silicon diodes and other electronic circuits and switching devices because its atomic structure makes the element an ideal semiconductor. Silicon is commonly doped, or mixed, with other elements, such as boron, phosphorus and arsenic, to alter its conductive properties.
A chip is a small piece of semiconducting material (usually silicon) on which an integrated circuit is embedded. A typical chip is less than ¼ square inch and can contain millions of electronic components (transistors). Computers consist of many chips placed on electronic boards called printed circuit boards. There are different types of chips. For example, CPU chips (also called microprocessors) contain an entire processing unit, whereas memory chips contain blank memory.
Semiconductor is a material that is neither a good conductor of electricity (like copper) nor a good insulator (like rubber). The most common semiconductor materials are silicon and germanium. These materials are then doped to create an excess or lack of electrons.
Computer chips, both for CPU and memory, are composed of semiconductor materials. Semiconductors make it possible to miniaturize electronic components, such as transistors. Not only does miniaturization mean that the components take up less space, it also means that they are faster and require less energy.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Fourth Generation - 1971-Present: Microprocessors
The microprocessor brought in the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. A microprocessor is a silicon chip that contains a CPU. In the world of personal computers, the terms microprocessor and CPU are used interchangeably. At the heart of all personal computers and most workstations sits a microprocessor. Microprocessors also control the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles.
Three basic characteristics differentiate microprocessors:
- Instruction Set: The set of instructions that the microprocessor can execute.
- Bandwidth: The number of bits processed in a single instruction.
- Clock Speed: Given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.
For the latter two characteristics, the higher the value, the more powerful the CPU. For example, a 32-bit microprocessor that runs at 50 MHz is more powerful than a 16-bit microprocessor that runs at 25 MHz.
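The comparison above can be made concrete with a deliberately crude estimate that multiplies word size by clock rate. Real performance depends on the instruction set, memory system and much else; this sketch only shows why the 32-bit, 50 MHz part wins on both counts:

```python
# Crude throughput estimate: word size (bits) x clock speed (Hz).
# This ignores instruction sets, memory speed and everything else that
# affects real performance; it only illustrates the text's comparison.
def bits_per_second(word_bits, mhz):
    return word_bits * mhz * 1_000_000

newer = bits_per_second(32, 50)   # 1,600,000,000 bits/s
older = bits_per_second(16, 25)   #   400,000,000 bits/s
print(newer // older)             # 4x on this crude measure
```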
What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip.
CPU is an abbreviation of central processing unit, pronounced as separate letters. The CPU is the brains of the computer. Sometimes referred to simply as the processor or central processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.
On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single chip called a microprocessor.
Two typical components of a CPU are:
- The arithmetic logic unit (ALU), which performs arithmetic and logical operations.
- The control unit, which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.
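The interplay of the two components can be sketched as a toy fetch-decode-execute loop. The one-register machine and its instruction format are invented for illustration and do not correspond to any real CPU:

```python
# A minimal sketch of the control unit's fetch-decode-execute cycle,
# calling on the ALU for arithmetic. Instruction format is hypothetical.
def alu(op, a, b):
    """Arithmetic logic unit: performs arithmetic/logical operations."""
    return {"ADD": a + b, "SUB": a - b}[op]

def run(program):
    """Control unit: fetches, decodes and executes instructions."""
    acc = 0                         # accumulator register
    pc = 0                          # program counter
    while pc < len(program):
        op, operand = program[pc]   # fetch + decode
        acc = alu(op, acc, operand) # execute, via the ALU
        pc += 1
    return acc

print(run([("ADD", 5), ("ADD", 7), ("SUB", 2)]))   # 10
```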
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.
Fifth Generation - Present and Beyond: Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today.
Artificial Intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at the Massachusetts Institute of Technology. Artificial intelligence includes:
- Games Playing: programming computers to play games such as chess and checkers
- Expert Systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms)
- Natural Language: programming computers to understand natural human languages
- Neural Networks: Systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains
- Robotics: programming computers to see and hear and react to other sensory stimuli
Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.
In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.
Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators.
There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited -- you must speak slowly and distinctly.
In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations.
Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.
There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.
Voice Recognition
The field of computer science that deals with designing computer systems that can recognize spoken words. Note that voice recognition implies only that the computer can take dictation, not that it understands what is being said. Comprehending human languages falls under a different field of computer science called natural language processing. A number of voice recognition systems are available on the market. The most powerful can recognize thousands of words. However, they generally require an extended training session during which the computer system becomes accustomed to a particular voice and accent. Such systems are said to be speaker dependent.
Many systems also require that the speaker speak slowly and distinctly and separate each word with a short pause. These systems are called discrete speech systems. Recently, great strides have been made in continuous speech systems -- voice recognition systems that allow you to speak naturally. There are now several continuous-speech systems available for personal computers.
Because of their limitations and high cost, voice recognition systems have traditionally been used only in a few specialized situations. For example, such systems are useful in instances when the user is unable to use a keyboard to enter data because his or her hands are occupied or disabled. Instead of typing commands, the user can simply speak into a headset. Increasingly, however, as the cost decreases and performance improves, speech recognition systems are entering the mainstream and are being used as an alternative to keyboards.
The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Parallel processing is the simultaneous use of more than one CPU to execute a program. Ideally, parallel processing makes a program run faster because there are more engines (CPUs) running it. In practice, it is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other.
Most computers have just one CPU, but some models have several. There are even computers with thousands of CPUs. With single-CPU computers, it is possible to perform parallel processing by connecting the computers in a network. However, this type of parallel processing requires very sophisticated software called distributed processing software.
Note that parallel processing differs from multitasking, in which a single CPU executes several programs at once.
Parallel processing is also called parallel computing.
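The idea of several engines running one program can be sketched with Python's standard library: a sum over a range is split into chunks, each handed to a separate worker process. The chunking scheme here is a minimal illustration, not the sophisticated distributed processing software the text mentions:

```python
# A minimal sketch of parallel processing: one computation divided
# across several worker processes ("engines", in the text's terms).
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Work done by one engine: sum its assigned sub-range."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split [0, n) into one chunk per worker and combine the results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)   # last chunk picks up any remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as sum(range(1_000_000)), computed by four processes.
    print(parallel_sum(1_000_000))
```

Note the hard part the text alludes to: the program must be divisible into independent chunks; summation happens to divide cleanly, but many programs do not.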
Quantum computation and molecular and nano-technology will radically change the face of computers in years to come. First proposed in the 1970s, quantum computing relies on quantum physics, taking advantage of certain quantum-physical properties of atoms or nuclei that allow them to work together as quantum bits, or qubits, which serve as the computer's processor and memory. By interacting with each other while being isolated from the external environment, qubits can perform certain calculations exponentially faster than conventional computers.
Qubits do not rely on the traditional binary nature of computing. While traditional computers encode information into bits using binary numbers, either a 0 or a 1, and can only do calculations on one set of numbers at once, quantum computers encode information as a series of quantum-mechanical states, such as spin directions of electrons or polarization orientations of a photon. These states might represent a 1 or a 0, might represent a combination of the two, might represent a number expressing that the state of the qubit is somewhere between 1 and 0, or might represent a superposition of many different numbers at once. A quantum computer can do an arbitrary reversible classical computation on all the numbers simultaneously, which a binary system cannot do, and also has some ability to produce interference between the various numbers. By doing a computation on many different numbers at once, then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of the same size. In using only a single processing unit, a quantum computer can naturally perform myriad operations in parallel.
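The superposition idea can be illustrated with a toy state-vector model of a single qubit, using only the standard library. This is a pedagogical sketch, not a real quantum simulation:

```python
# A toy model of one qubit. Its state is a pair of complex amplitudes
# (a, b) for the basis states |0> and |1>; measurement probabilities
# are |a|^2 and |b|^2.
import math

def hadamard(state):
    """Apply the Hadamard gate, putting a basis state into superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1 + 0j, 0 + 0j)       # the classical-like state |0>
superposed = hadamard(zero)   # equal superposition of |0> and |1>
probs = [abs(amp) ** 2 for amp in superposed]
print(probs)   # both outcomes close to 0.5: equally likely
```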
Quantum computing is not well suited for tasks such as word processing and email, but it is ideal for tasks such as cryptography and modeling and indexing very large databases.
Nanotechnology is a field of science whose goal is to control individual atoms and molecules to create computer chips and other devices that are thousands of times smaller than current technologies permit. Current manufacturing processes use lithography to imprint circuits on semiconductor materials. While lithography has improved dramatically over the last two decades -- to the point where some manufacturing plants can produce circuits smaller than one micron (1,000 nanometers) -- it still deals with aggregates of millions of atoms. It is widely believed that lithography is quickly approaching its physical limits. To continue reducing the size of semiconductors, new technologies that juggle individual atoms will be necessary. This is the realm of nanotechnology.
Although research in this field dates back to Richard P. Feynman's classic talk in 1959, the term nanotechnology was first coined by K. Eric Drexler in 1986 in the book Engines of Creation.
In the popular press, the term nanotechnology is sometimes used to refer to any sub-micron process, including lithography. Because of this, many scientists are beginning to use the term molecular nanotechnology when talking about true nanotechnology at the molecular level.
The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.
Here natural language means a human language. For example, English, French, and Chinese are natural languages. Computer languages, such as FORTRAN and C, are not.
Probably the single most challenging problem in computer science is to develop computers that can understand natural languages. So far, the complete solution to this problem has proved elusive, although a great deal of progress has been made. Fourth-generation languages are the programming languages closest to natural languages.