Other portable computing devices

There are several categories of portable computing devices that can run on batteries but are not usually classified as laptops: portable computers, keyboardless tablet PCs, Internet tablets, PDAs, handheld computers (UMPCs) and smartphones.

A keyboard-less tablet PC
A Palm TX PDA
A Nokia N800 Internet tablet
An OQO handheld computer
An Apple iPhone smartphone

A portable computer is a general-purpose computer that can be easily moved from place to place but cannot be used while in transit, usually because it requires some setting up and an AC power source. The most famous example is the Osborne 1. Such machines are also called "transportable" or "luggable" PCs.

A Tablet PC that lacks a keyboard (also known as a non-convertible Tablet PC) is shaped like a slate or a paper notebook and features a touchscreen with a stylus and handwriting recognition software. Tablets may not be best suited for applications requiring a physical keyboard for typing, but are otherwise capable of carrying out most tasks that an ordinary laptop can perform.

An Internet tablet is an Internet appliance in tablet form. Unlike a Tablet PC, an Internet tablet does not have much computing power and its application suite is limited; it cannot replace a general-purpose computer. Internet tablets typically feature an MP3 and video player, a web browser, a chat application and a picture viewer.

A personal digital assistant (PDA) is a small, usually pocket-sized, computer with limited functionality. It is intended to supplement and synchronize with a desktop computer, giving access to contacts, an address book, notes, e-mail and other features.

A handheld computer, also known as an Ultra Mobile PC (UMPC), is a full-featured, PDA-sized computer running a general-purpose operating system.

A smartphone is a PDA with integrated cellphone functionality. Current smartphones have a wide range of features and installable applications.

The boundaries separating these categories are blurry at times. For example, the OQO UMPC is also a PDA-sized Tablet PC; the Apple eMate had the clamshell form factor of a laptop but ran PDA software. The HP OmniBook line of laptops included some devices small enough to be called handheld computers. The hardware of the Nokia 770 Internet tablet is essentially the same as that of a PDA such as the Zaurus 6000; the only reason it is not called a PDA is that it lacks PIM software. On the other hand, both the 770 and the Zaurus can run some desktop Linux software, usually with modifications.

Major brands and manufacturers

There is a multitude of laptop brands and manufacturers, including several major brands that offer notebooks in various classes.

The major brands usually offer good service and support, including well-executed documentation and driver downloads that will remain available for many years after a particular laptop model is no longer produced. Capitalizing on service, support and brand image, laptops from major brands are more expensive than laptops by smaller brands and ODMs.

Some brands specialize in a particular class of laptops, such as gaming laptops (Alienware), netbooks (Asus Eee PC) and laptops for children (OLPC).

Many brands, including the major ones, neither design nor manufacture their laptops. Instead, a small number of Original Design Manufacturers (ODMs) design new laptop models, and the brands choose which models to include in their lineups. In 2006, seven major ODMs manufactured seven of every ten laptops in the world, with the largest (Quanta Computer) holding a 30% world market share.[41] As a result, identical models are often available both from a major label and from a low-profile ODM in-house brand.

Read more...

Netbook

An Asus Eee PC netbook

A netbook is a small laptop designed for portability and low price, with performance inferior to that of a standard notebook yet adequate for browsing the Internet and basic word processing. Netbooks use 10" and smaller screens, weigh 0.6 to 1.2 kg (1.5 to 3 pounds), and are generally powered by a CPU from one of the low-cost families with a high performance-to-power ratio, such as the Intel Atom, Celeron ULV, or VIA C7 processors.[14]

Netbooks use general-purpose operating systems such as Linux or Windows XP. Some models use small-capacity (4 to 40 GB) solid-state drives (SSDs) instead of the usual hard drives to save weight and battery power.

Read more...

Durability

A clogged heatsink on a 2.5-year-old laptop.

Due to their portability, laptops are subject to more wear and physical damage than desktops. Components such as screen hinges, latches, power jacks[38] and power cords deteriorate gradually due to ordinary use. A liquid spill onto the keyboard, a rather minor mishap with a desktop system, can damage the internals of a laptop and result in a costly repair. One study found that a laptop is 3 times more likely to break during the first year of use than a desktop.[39]

Original external components are expensive (a replacement AC adapter, for example, could cost $75). Other parts are inexpensive (a power jack can cost a few dollars), but their replacement may require extensive disassembly and reassembly of the laptop by a technician. Other inexpensive but fragile parts often cannot be purchased separately from larger, more expensive components.[40] The repair cost of a failed motherboard or LCD panel may exceed the value of a used laptop.

Laptops rely on extremely compact cooling systems involving a fan and heat sink that can fail due to eventual clogging by accumulated airborne dust and debris. Most laptops do not have any sort of removable dust collection filter over the air intake for these cooling systems, resulting in a system that gradually runs hotter and louder as the years pass. Eventually the laptop starts to overheat even at idle load levels. This dust is usually stuck inside where casual cleaning and vacuuming cannot remove it. Instead, a complete disassembly is needed to clean the laptop.

Battery life of laptops is limited; the capacity drops with time, necessitating an eventual replacement after a few years.

Read more...

Ergonomics and health

A laptop coaster prevents heating of the lap and improves laptop airflow.

Because of their small, flat keyboards and trackpad pointing devices, prolonged use of laptops can cause repetitive strain injury (RSI).[35] The use of ergonomic keyboards and pointing devices is recommended to prevent injury when working for long periods; they can easily be connected to a laptop by USB or via a docking station. Some health standards require ergonomic keyboards at workplaces.

The integrated screen often causes users to hunch over for a better view, which can cause neck or spinal injuries. A larger and higher-quality external screen can be connected to almost any laptop to alleviate this and to provide additional screen real estate for more productive work.

A study by State University of New York researchers found that heat generated from laptops can raise the temperature of the scrotum, potentially putting sperm count at risk. The small study, which included little more than two dozen men aged 13 to 35, found that the sitting position required to balance a laptop can raise scrotum temperature by as much as 2.1 °C (3.8 °F). Heat from the laptop itself can raise the temperature by another 0.7 °C (1.4 °F), bringing the potential total increase to 2.8 °C (5.2 °F). However, further research is needed to determine whether this directly affects sterility in men.[36]

A common practical solution to this problem is to place the laptop on a table or desk. Another solution is to use a cooling unit: these units are usually USB-powered and consist of a hard, thin plastic case housing one to three cooling fans, designed to sit under the laptop. They keep the laptop cool to the touch and greatly reduce heat buildup. Several companies make such coolers.

Heat from using a laptop on the lap can also cause skin discoloration on the thighs.[37]

Read more...

Rugged laptop

A rugged (or ruggedized) laptop is designed to reliably operate in harsh usage conditions such as strong vibrations, extreme temperatures and wet or dusty environments. Rugged laptops are usually designed from scratch, rather than adapted from regular consumer laptop models. Rugged notebooks are bulkier, heavier, and much more expensive than regular laptops[15], and thus are seldom seen in regular consumer use.

Design features found in rugged laptops include rubber sheeting under the keyboard keys; sealed port and connector covers; passive cooling; super-bright displays easily readable in daylight; cases and frames made of magnesium alloy, or with a magnesium-alloy rollcage[16], much stronger than the plastic found in commercial laptops; and shock-mounted solid-state storage devices or hard disk drives that withstand constant vibration. Rugged laptops are commonly used by public safety services (police, fire and medical emergency), the military, utilities, field service technicians, and construction, mining and oil drilling personnel. Rugged laptops are usually sold to organizations rather than individuals, and are rarely marketed via retail channels.

Read more...

Components

Miniaturization: a comparison of a desktop computer motherboard (ATX form factor) to a motherboard from a 13" laptop (2008 unibody MacBook)
Inner view of a Sony Vaio laptop

The basic components of laptops are similar in function to their desktop counterparts, but are miniaturized, adapted to mobile use, and designed for low power consumption. Because of these additional requirements, laptop components offer worse performance than desktop parts of comparable price. Furthermore, the design constraints on power, size, and cooling of laptops limit the maximum performance of laptop parts compared to that of desktop components.[17]

The following list summarizes the differences and distinguishing features of laptop components in comparison to desktop personal computer parts:

  • Motherboard - laptop motherboards are highly make- and model-specific, and do not conform to a desktop form factor. Unlike a desktop board that usually has several slots for expansion cards (3 to 7 are common), a board for a small, highly integrated laptop may have no expansion slots at all, with all the functionality implemented on the motherboard itself; the only expansion possible in this case is via an external port such as USB. Other boards may have one or more standard or proprietary expansion slots. Several other functions (storage controllers, networking, sound card and external ports) are implemented on the motherboard.[18]
A SODIMM memory module.
  • Memory (RAM) - SO-DIMM memory modules that are usually found in laptops are about half the size of desktop DIMMs.[18] They may be accessible from the bottom of the laptop for ease of upgrading, or placed in locations not intended for user replacement such as between the keyboard and the motherboard.
  • Expansion cards - A PC Card (formerly PCMCIA) or ExpressCard bay for expansion cards is often present on laptops to allow adding and removing functionality, even when the laptop is powered on. Some subsystems (such as Wi-Fi or a cellular modem) can be implemented as replaceable internal expansion cards, usually accessible under an access cover on the bottom of the laptop. Two popular standards for such cards are MiniPCI and its successor, the PCI Express Mini.[21]
  • Power supply - laptops are powered by an internal rechargeable battery that is charged using an external power supply. The power supply can charge the battery and power the laptop simultaneously; when the battery is fully charged, the laptop continues to run on AC power. The charger adds about 400 grams (1 lb) to the overall "transport weight" of the notebook.
  • Battery - Current laptops use lithium-ion batteries, with more recent models using the newer lithium-polymer technology. These two technologies have largely replaced the older nickel-metal hydride batteries. Typical battery life for standard laptops is two to five hours of light-duty use, but may drop to as little as one hour when doing power-intensive tasks. Battery performance gradually decreases with time, leading to an eventual replacement in one to five years, depending on the charging and discharging pattern. This large-capacity main battery should not be confused with the much smaller battery nearly all computers use to run the real-time clock and to store the BIOS configuration in CMOS memory when the computer is off.
  • Video display controller - on standard laptops the video controller is usually integrated into the chipset. This tends to limit the use of laptops for gaming and entertainment, two fields with constantly escalating hardware demands.[22] Higher-end laptops and desktop replacements in particular often come with dedicated graphics processors on the motherboard or on an internal expansion card. These mobile graphics processors are comparable in performance to mainstream desktop graphics accelerator boards.[23]
  • Display - Most modern laptops feature 12-inch (30 cm) or larger color active-matrix displays with resolutions of 1024×768 pixels and above. Many current models use screens with higher resolution than is typical for desktop PCs (for example, the 1440×900 resolution of a 15" MacBook Pro[24] can be found on 19" widescreen desktop monitors).
A size comparison of 3.5" and 2.5" hard disk drives
  • Removable media drives - a DVD/CD reader/writer drive is standard. CD-only drives are becoming rare, while Blu-ray is not yet common on notebooks.[25] Many ultraportables and netbooks either move the removable media drive into the docking station or omit it altogether.
  • Internal storage - hard disks are physically smaller, 2.5-inch (64 mm) or 1.8-inch (46 mm), compared to desktop 3.5-inch (89 mm) drives. Some new laptops (usually ultraportables) instead employ flash-memory-based SSDs, which are more expensive but faster, lighter and more power-efficient. Currently, 250 to 320 GB capacities are common for laptop hard disks (64 to 128 GB for SSDs).
  • Input - a pointing stick, touchpad or both are used to control the position of the cursor on the screen, and an integrated keyboard is used for typing. An external keyboard and mouse may be connected using USB or PS/2 ports (if present).
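The battery-life figures quoted above follow from simple arithmetic: run time is roughly battery capacity divided by average power draw. A minimal sketch, using purely hypothetical capacity and draw figures:

```python
def runtime_hours(capacity_wh, draw_w):
    """Rough run-time estimate: battery capacity (Wh) over average draw (W)."""
    return capacity_wh / draw_w

# Hypothetical 56 Wh battery under a light-duty 15 W load
light = runtime_hours(56, 15)   # about 3.7 hours, within the 2-5 hour range
# The same battery under a power-intensive 50 W load
heavy = runtime_hours(56, 50)   # just over 1 hour
```

Real run times also depend on battery age and workload variation, so estimates like this are only indicative.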

Read more...

Desktop replacement computer

An Apple 17" MacBook Pro is often used as a desktop replacement.

A desktop replacement computer is a laptop that provides most of the capabilities of a desktop computer, with a similar level of performance. Desktop replacements are usually larger and heavier than standard laptops. They contain more powerful components and numerous ports, and have a 15.4" or larger display. Because of their bulk, they are not as portable as other laptops and their operation time on batteries is typically shorter.[11]

Some laptops in this class use a limited range of desktop components to provide better performance for the same price at the expense of battery life; in a few of those models, there is no battery at all, and the laptop can only be used when plugged in. These are sometimes called desknotes, a portmanteau of the words "desktop" and "notebook," though the term can also be applied to desktop replacement computers in general.[12]

The names "Media Center Laptops" and "Gaming Laptops" are also used to describe this class of notebooks.[10]

Read more...

Laptop

An ultraportable IBM X31 with 12" screen on an IBM T43 Thin & Light laptop with a 14" screen

A laptop computer, also known as a notebook computer, is a small personal computer designed for mobile use. A laptop integrates all of the typical components of a desktop computer, including a display, a keyboard, a pointing device (a touchpad, also known as a trackpad, or a pointing stick) and a battery into a single portable unit. The rechargeable battery is charged from an AC/DC adapter and has enough capacity to power the laptop for several hours.

A laptop is usually shaped like a large notebook, with a thickness of 0.7–1.5 inches (18–38 mm) and dimensions ranging from 10×8 inches (27×22 cm, 13" display) to 15×11 inches (39×28 cm, 17" display) and up. Modern laptops weigh 3 to 12 pounds (1.4 to 5.4 kg), and some older laptops were even heavier. Most laptops are designed in the flip form factor to protect the screen and the keyboard when closed.

Originally considered "a small niche market"[1] and perceived as suitable for "specialized field applications" such as "the military, the Internal Revenue Service, accountants and sales representatives"[1][2], battery-powered portables had just 2% worldwide market share in 1986[3]. But today, there are already more laptops than desktops in the enterprise[4] and, according to a forecast by Intel, more laptops than desktops will be sold in the general PC market as soon as 2009[5].

Read more...

IBM 1400 series

An IBM 7040 installation that used an IBM 1401 for I/O support. The 1401 is partially shown at the far lower right, with a 1402 card reader/punch behind it. An IBM 1403 printer is in the front center of the picture.
The IBM 1400 series were second-generation (transistorized) mid-range business computers that IBM sold in the early 1960s. They could operate as independent systems, in conjunction with IBM punched card equipment, or as auxiliary equipment to other computer systems.

1400-series machines stored information in magnetic cores as variable-length character strings terminated by a special flag. Arithmetic was performed character by character. Input and output were via punched cards, magnetic tape and high-speed line printers. Disk storage was also available.

Read more...

Software engineering

The new Airbus A380 uses a substantial amount of software to create a "paperless" cockpit.
A typical software engineer's office.
Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.[1]

The term software engineering first appeared in the 1968 NATO Software Engineering Conference and was meant to provoke thought about the "software crisis" of the time.[2] Since then, it has continued as a profession and field of study dedicated to creating software that is higher quality, cheaper, more maintainable, and quicker to build. Because the field is still relatively young compared to its sister fields of engineering, there is still much work and debate around what software engineering actually is. It has grown organically out of the limitations of viewing software as just programming. Software development is a term sometimes preferred by practitioners in the industry who view software engineering as too heavy-handed and constrictive for the malleable process of creating software.

Yet, in spite of its youth as a profession, the field's future looks bright: Money Magazine and Salary.com rated software engineering the best job in America in 2006.[3]

Read more...

History of programming

Wired plug board for an IBM 402 Accounting Machine.
The concept of devices that operate following a pre-defined set of instructions traces back to Greek mythology, notably Hephaestus and his mechanical servants[3]. The Antikythera mechanism was a calculator utilizing gears of various sizes and configurations to determine its operation. The earliest known programmable machines (machines whose behavior can be controlled and predicted with a set of instructions) were Al-Jazari's programmable automata of 1206.[4] One of Al-Jazari's robots was originally a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties. Programming this mechanism's behavior meant placing pegs and cams into a wooden drum at specific locations; these would then bump into little levers that operated a percussion instrument. The output of this device was a small drummer playing various rhythms and drum patterns.[5][6] Another sophisticated programmable machine by Al-Jazari was the castle clock, notable for its concept of variables which the operator could manipulate as necessary (i.e. the length of day and night). The Jacquard loom, which Joseph Marie Jacquard developed in 1801, used a series of pasteboard cards with holes punched in them; the hole pattern represented the pattern the loom had to follow in weaving cloth, and the loom could produce entirely different weaves using different sets of cards. Charles Babbage adopted the use of punched cards around 1830 to control his Analytical Engine. The synthesis of numerical calculation, predetermined operation and output, along with a way to organize and input instructions in a manner relatively easy for humans to conceive and produce, led to the modern development of computer programming.

Development of computer programming accelerated through the Industrial Revolution. The punched card innovation was later refined by Herman Hollerith, who in 1896 founded the Tabulating Machine Company (which became IBM). He invented the Hollerith punched card, the card reader, and the key punch machine. These inventions were the foundation of the modern information processing industry. The addition of a plug-board to his 1906 Type I Tabulator allowed it to do different jobs without being physically rebuilt. By the late 1940s there were a variety of plug-board programmable machines, called unit record equipment, performing data processing tasks such as card reading. Early computer programmers used plug-boards for the variety of complex calculations requested of the newly invented machines.
Data and instructions could be stored on external punch cards, which were kept in order and arranged in program decks.

The invention of the von Neumann architecture allowed computer programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions of the particular machine, often in binary notation, and every model of computer was likely to need different instructions to do the same task. Later, assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g. ADD X, TOTAL). In 1954, Fortran, the first higher-level programming language, was invented. It allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, was converted into machine instructions using a special program called a compiler. Many other languages were developed, including ones for commercial programming, such as COBOL. Programs were mostly still entered using punched cards or paper tape (see computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards.

As time has progressed, computers have made giant leaps in processing power, which has brought about newer programming languages that are more abstracted from the underlying hardware. Although these more abstract languages require additional overhead, in most cases the huge increase in the speed of modern computers means they suffer little noticeable performance loss compared to earlier counterparts. The benefits of these more abstract languages are that they offer an easier learning curve for people less familiar with older, lower-level programming languages, and they allow a more experienced programmer to develop simple applications quickly. Despite these benefits, large, complicated programs, and programs that depend heavily on speed, still require the faster, relatively lower-level languages on today's hardware. (The same concerns were raised about the original Fortran language.)

Throughout the second half of the twentieth century, programming was an attractive career in most developed countries. Some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities in less developed areas. It is unclear how far this trend will continue and how deeply it will impact programmer wages and opportunities.

Read more...

Types of Trojan horse payloads

Trojan horse payloads are almost always designed to cause harm, but they can also be harmless. They are classified based on how they breach and damage systems. The six main types of Trojan horse payload are:

* Remote access
* Data destruction
* Downloader
* Server Trojan (proxy, FTP, IRC, email, HTTP/HTTPS, etc.)
* Security software disabler
* Denial-of-service attack (DoS)

Some examples of damage are:

* Erasing or overwriting data on a computer
* Re-installing itself after being disabled
* Encrypting files in a cryptoviral extortion attack
* Corrupting files in a subtle way
* Uploading and downloading files
* Planting fake links that lead to false websites, chats, or other account-based sites, making a local account on the computer appear to engage in activity that never took place
* Falsifying records of downloading software, movies, or games from websites never visited by the victim.
* Allowing remote access to the victim's computer. This is called a RAT (remote access trojan)
* Spreading other malware, such as viruses (this type of trojan horse is called a 'dropper' or 'vector')
* Setting up networks of zombie computers in order to launch DDoS attacks or send spam.
* Spying on the user of a computer and covertly reporting data like browsing habits to other people (see the article on spyware)
* Making screenshots
* Logging keystrokes to steal information such as passwords and credit card numbers
* Phishing for bank or other account details, which can be used for criminal activities
* Installing a backdoor on a computer system
* Opening and closing the CD-ROM tray
* Playing sounds, videos or displaying images
* Using the modem to dial expensive numbers, causing massive phone bills
* Harvesting e-mail addresses and using them for spam
* Restarting the computer whenever the infected program is started
* Deactivating or interfering with anti-virus and firewall programs
* Deactivating or interfering with other competing forms of malware
* Randomly shutting off the computer
* Installing a virus
* Slowing down the computer
* Displaying pornographic sites

Read more...

Resident viruses

Resident viruses contain a replication module that is similar to the one that is employed by nonresident viruses. However, this module is not called by a finder module. Instead, the virus loads the replication module into memory when it is executed and ensures that this module is executed each time the operating system is called to perform a certain operation. For example, the replication module can be called each time the operating system executes a file. In this case, the virus infects every suitable program that is executed on the computer.

Resident viruses are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast infectors are designed to infect as many files as possible. For instance, a fast infector can infect every potential host file that is accessed. This poses a special problem to anti-virus software, since a virus scanner will access every potential host file on a computer when it performs a system-wide scan. If the virus scanner fails to notice that such a virus is present in memory, the virus can "piggy-back" on the virus scanner and in this way infect all files that are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this method is that infecting many files may make detection more likely, because the virus may slow down a computer or perform many suspicious actions that can be noticed by anti-virus software. Slow infectors, on the other hand, are designed to infect hosts infrequently. For instance, some slow infectors only infect files when they are copied. Slow infectors are designed to avoid detection by limiting their actions: they are less likely to slow down a computer noticeably, and will at most infrequently trigger anti-virus software that detects suspicious behavior by programs. The slow infector approach does not seem very successful, however.
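The trade-off between fast and slow infectors described above can be sketched as a harmless event-driven simulation (the event names and filenames below are invented for illustration; nothing is actually infected):

```python
# Harmless simulation: "infecting" a file just means recording its name.
def simulate(events, infect_on):
    """Return the set of files 'infected' when their triggering event is in infect_on."""
    infected = set()
    for event, filename in events:
        if event in infect_on:
            infected.add(filename)
    return infected

events = [
    ("access", "a.exe"), ("access", "b.exe"),
    ("copy", "b.exe"), ("access", "c.exe"),
]

# A fast infector acts on every file access; a slow infector only on copies.
fast = simulate(events, {"access", "copy"})
slow = simulate(events, {"copy"})
```

The fast strategy touches all three files after four events, while the slow one touches only one, mirroring the detection trade-off discussed above.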

Read more...

Infection strategies

In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs. If a user tries to start an infected program, the virus' code may be executed first. Viruses can be divided into two types, on the basis of their behavior when they are executed. Nonresident viruses immediately search for other hosts that can be infected, infect these targets, and finally transfer control to the application program they infected. Resident viruses do not search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers control to the host program. The virus stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.
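The control-flow difference between the two types can be modeled as a harmless toy sketch (the class names and the `log` convention are invented for illustration; no real infection takes place, only strings are appended to a list):

```python
# Toy model of the two behaviors: each class only appends strings to a log.

class NonresidentVirus:
    """Searches for hosts up front, infects them, then runs the host program."""
    def run(self, host_files, log):
        for host in host_files:           # search-and-infect phase happens first
            log.append(f"infect {host}")
        log.append("run host")            # control passes to the host last

class ResidentVirus:
    """Installs a hook at startup and infects lazily on later file accesses."""
    def run(self, host_files, log):
        log.append("install hook")        # stays active in the background
        log.append("run host")            # host runs immediately
    def on_file_access(self, host, log):  # invoked by the hooked OS operation
        log.append(f"infect {host}")
```

A nonresident virus therefore delays the host's startup with its search phase, while a resident one spreads only as files are touched later.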

Read more...

Computer virus

A computer virus is a computer program that can copy itself and infect a computer without the permission or knowledge of the user. The term "virus" is also commonly, though erroneously, used to refer to many other types of malware and adware programs. The original virus may modify the copies, or the copies may modify themselves, as occurs in a metamorphic virus. A virus can only spread from one computer to another when its host is taken to the uninfected computer, for instance by a user sending it over a network or the Internet, or carrying it on a removable medium such as a floppy disk, CD, or USB drive. Viruses can also spread to other computers by infecting files on a network file system or a file system that is accessed by another computer. Viruses are sometimes confused with computer worms and Trojan horses. A worm can spread itself to other computers without needing to be transferred as part of a host, while a Trojan horse is a file that appears harmless. When executed, worms and Trojans may harm a computer system's data, performance, or networking throughput. In general, a worm does not actually harm either the system's hardware or software, while, at least in theory, a Trojan's payload may be capable of almost any type of harm if executed. Some malware cannot be seen while the infected program is not running, but as soon as the infected code runs, the Trojan horse activates. This is why viruses and other malware are hard for users to find on their own, and why dedicated anti-malware tools are used.

Most personal computers are now connected to the Internet and to local area networks, facilitating the spread of malicious code. Today's viruses may also take advantage of network services such as the World Wide Web, e-mail, Instant Messaging and file sharing systems to spread, blurring the line between viruses and worms. Furthermore, some sources use an alternative terminology in which a virus is any form of self-replicating malware.

Some malware is programmed to damage the computer by damaging programs, deleting files, or reformatting the hard disk. Other malware programs are not designed to do any damage, but simply replicate themselves and perhaps make their presence known by presenting text, video, or audio messages. Even these less sinister malware programs can create problems for the computer user. They typically take up computer memory used by legitimate programs. As a result, they often cause erratic behavior and can result in system crashes. In addition, much malware is bug-ridden, and these bugs may lead to system crashes and data loss.

Read more...

Internet access

Common methods of home access include dial-up, landline broadband (over coaxial cable, fiber-optic or copper wires), Wi-Fi, satellite, and 3G cellular networks.

Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal" and "Web payphone". Many hotels also have public terminals, though these are usually fee-based, and are widely used for tasks such as ticket booking, bank transactions and online payments.

Wi-Fi provides wireless access to computer networks, and therefore to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks, and commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench.[11]

Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services.

High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though mobile access is not as widely used. An Internet access provider and protocol matrix can be used to compare the methods of getting online.

Read more...

Streaming media

Many existing radio and television broadcasters provide Internet "feeds" of their live audio and video streams (for example, the BBC). They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of material is much wider, from pornography to highly specialized, technical webcasts. Podcasting is a variation on this theme, where—usually audio—material is first downloaded in full and then may be played back on a computer or shifted to a digital audio player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material on a worldwide basis.

Webcams can be seen as an even lower-budget extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound.

YouTube was founded on February 15, 2005. It is now the leading website for free streaming video, with a vast number of users. It uses a Flash-based web player which streams video files in the FLV format. Users can watch videos without signing up; however, users who do sign up can upload an unlimited number of videos and are given their own personal profile. It is currently estimated that there are 64,000,000 videos on YouTube and that 825,000 new videos are uploaded every day.[citation needed]

Read more...

The World Wide Web

Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks
For more details on this topic, see World Wide Web.


Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but, as discussed above, the two terms are not synonymous.

The World Wide Web is a huge set of interlinked documents, images and other resources, linked by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines that store originals, and cached copies of, these resources to deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the Internet.
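As a sketch of how a URL names a resource on the Web, the standard-library example below breaks one into the pieces a user agent needs; the URL itself is an illustrative address using a reserved documentation domain, not a real page:

```python
# A URL identifies both the protocol used to fetch a resource and where
# that resource lives. Parsing one shows the pieces a browser works with.
from urllib.parse import urlparse

url = "http://www.example.com:80/path/page.html?query=1#section"
parts = urlparse(url)

print(parts.scheme)   # http  (the protocol, here HTTP)
print(parts.netloc)   # www.example.com:80  (the host serving the resource)
print(parts.path)     # /path/page.html  (which resource on that host)
```

The scheme tells the user agent which protocol to speak (HTTP here), while the host and path tell it which server to contact and which resource to request.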

Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.

Software products that can access the resources of the Web are correctly termed user agents. In normal use, web browsers, such as Internet Explorer, Firefox and Apple Safari, access web pages and allow users to navigate from one to another via hyperlinks. Web documents may contain almost any combination of computer data including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations.

Through keyword-driven Internet research using search engines like Yahoo! and Google, millions of people worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.

Using the Web, it is also easier than ever before for individuals and organisations to publish ideas and information to an extremely large audience. Anyone can find ways to publish a web page, a blog or build a website for very little initial cost. Publishing and maintaining large, professional websites full of attractive, diverse and up-to-date information is still a difficult and expensive proposition, however.

Many individuals and some companies and groups use "web logs" or blogs, which are largely used as easily updatable online diaries. Some commercial organisations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.

Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow.

In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organisation or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.

Read more...

E-mail

The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Even today it can be important to distinguish between Internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted on many other networks and machines outside both the sender's and the recipient's control. During this time it is quite possible for the content to be read and even tampered with by third parties, if anyone considers it important enough. Purely internal or intranet mail systems, where the information never leaves the corporation's or organization's network, are much more secure, although in any organization there will be IT and other personnel whose job may involve monitoring, and occasionally accessing, the e-mail of other employees not addressed to them.

Read more...

Internet protocols

The My Opera Community server rack. From the top, user file storage (content of files.myopera.com), "bigma" (the master MySQL database server), and two IBM blade centers containing multi-purpose machines (Apache front ends, Apache back ends, slave MySQL database servers, load balancers, file servers, cache servers and sync masters).

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet.

The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF).[9] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting discussions and final standards are published in Request for Comments (RFCs), freely available on the IETF web site.

The principal methods of networking that enable the Internet are contained in a series of RFCs that constitute the Internet Standards. These standards describe a system known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the space (Application Layer) of the software application, e.g., a web browser application, and just below it is the Transport Layer, which connects applications on different hosts via the network (e.g., the client-server model). The underlying network consists of two layers: the Internet Layer, which enables computers to connect to one another via intermediate (transit) networks and thus is the layer that establishes internetworking and the Internet, and lastly, at the bottom, a software layer that provides connectivity between hosts on the same local link (therefore called the Link Layer), e.g., a local area network (LAN) or a dial-up connection. This model is also known as the TCP/IP model of networking. While other models have been developed, such as the Open Systems Interconnection (OSI) model, they are not compatible with it in the details of description or implementation.
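A minimal sketch of the Transport Layer in action, assuming only the Python standard library and the local loopback interface: one process listens, another connects, and application data flows between the two over TCP while the lower layers handle delivery.

```python
# Transport Layer sketch: a tiny TCP echo server and client on loopback.
# AF_INET selects IPv4 (the Internet Layer); SOCK_STREAM selects TCP.
import socket
import threading

def serve(listener):
    conn, _ = listener.accept()        # accept one Transport Layer connection
    with conn:
        data = conn.recv(1024)         # receive application data
        conn.sendall(b"echo: " + data) # send a reply back

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=serve, args=(listener,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")           # Application Layer payload
    reply = client.recv(1024)
t.join()
print(reply)  # b'echo: hello'
```

The applications only see streams of bytes between named endpoints; routing across intermediate networks is the Internet Layer's job, invisible at this level.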

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems for computers on the Internet and facilitates the internetworking of networks. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (4.3×10⁹) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion. A new protocol version, IPv6, was developed to provide vastly larger addressing capabilities and more efficient routing of data traffic. IPv6 is currently in the commercial deployment phase around the world.
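The difference in addressing capacity can be made concrete with Python's standard ipaddress module; the two addresses below are taken from the reserved documentation ranges, purely for illustration:

```python
# IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
import ipaddress

ipv4_space = 2 ** 32    # total IPv4 addresses: ~4.3 billion
ipv6_space = 2 ** 128   # total IPv6 addresses: astronomically larger
print(ipv4_space)       # 4294967296

v4 = ipaddress.ip_address("192.0.2.1")    # IPv4 documentation range
v6 = ipaddress.ip_address("2001:db8::1")  # IPv6 documentation range
print(v4.version, v6.version)             # 4 6
```

The 32-bit space was generous in the 1980s but is exhausted by today's scale of hosts, which is precisely the problem IPv6's 128-bit space addresses.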

IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet that is not accessible with IPv4 software. This means software upgrades are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems can already operate with both versions of the Internet Protocol. Network infrastructure, however, is still lagging in this development.

Read more...

Security engineering


Security engineering is a specialized field of engineering that deals with the development of detailed engineering plans and designs for security features, controls and systems. It is similar to other systems engineering activities in that its primary motivation is to support the delivery of engineering solutions that satisfy pre-defined functional and user requirements, but with the added dimension of preventing misuse and malicious behavior. These constraints and restrictions are often asserted as a security policy.

In one form or another, security engineering has existed as an informal field of study for several centuries; the fields of locksmithing and security printing, for example, have been around for many years.

Due to recent catastrophic events, most notably 9/11, security engineering has become a rapidly growing field. A report completed in 2006 estimated that the global security industry was valued at US$150 billion.[1]

Security engineering involves aspects of social science, psychology (such as designing a system to 'fail well' instead of trying to eliminate all sources of error) and economics, as well as physics, chemistry, mathematics, architecture and landscaping.[1] Some of the techniques used, such as fault tree analysis, are derived from safety engineering.
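Fault tree analysis, mentioned above, can be sketched as follows. The gate functions are the standard combinators for independent events; the event names and probabilities are illustrative assumptions, not drawn from any real system.

```python
# Fault tree sketch: the probability of a top-level failure is combined
# from independent basic events via AND gates (all must fail) and
# OR gates (any one failing suffices).

def and_gate(*probs):
    """All child events must occur: multiply probabilities (independence assumed)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    """Any child event suffices: 1 minus product of complements (independence assumed)."""
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical top event: "system down" = (power fails AND backup fails) OR disk fails
p_power, p_backup, p_disk = 0.01, 0.1, 0.005
p_system_down = or_gate(and_gate(p_power, p_backup), p_disk)
print(round(p_system_down, 6))  # 0.005995
```

Note how the AND gate rewards redundancy: the power branch contributes only 0.001 because both the mains and the backup must fail together.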

Other techniques such as cryptography were previously restricted to military applications. One of the pioneers of security engineering as a formal field of study is Ross Anderson.

Read more...

Theoretical computer science

The broader field of theoretical computer science encompasses both the classical theory of computation, and a wide range of other topics that focus on the more abstract, logical and mathematical aspects of computing.

P → Q    Γ ⊢ x : Int

Mathematical logic · Automata theory · Number theory · Graph theory · Type theory · Category theory · Computational geometry · Quantum computing theory

Read more...

Theory of computation

Classically, the study of the theory of computation is focused on answering fundamental questions about what can be computed, and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a computational problem.

The famous "P=NP?" problem, one of the Millennium Prize Problems,[18] is an open problem in the theory of computation.

P = NP ?

Computability theory · Computational complexity theory
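The asymmetry behind the "P = NP?" question can be sketched with the NP-complete subset-sum problem: a proposed solution (a certificate) can be verified in polynomial time, while the obvious search examines exponentially many subsets. The input numbers below are illustrative.

```python
# Subset sum: does some subset of the numbers add up to the target?
# Verifying a candidate is fast; the naive search tries all 2^n subsets.
from itertools import combinations

def verify(numbers, subset, target):
    """Polynomial-time certificate check: valid members summing to target."""
    return all(x in numbers for x in subset) and sum(subset) == target

def search(numbers, target):
    """Brute-force search over all 2^n subsets: exponential time."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
sol = search(nums, 9)
print(sol, verify(nums, sol, 9))  # (4, 5) True
```

If P = NP, every problem whose solutions can be checked this quickly could also be *solved* comparably quickly, which is why the question is so consequential.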

Read more...

Major achievements

The German military used the Enigma machine during World War II for communications they thought to be secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor contributing to the Allied victory in WWII.[11]

Despite its relatively short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society. These include:

    * The start of the Digital Revolution, which led to the current Information Age and the Internet.[12]
* A formal definition of computation and computability, and proof that there are computationally unsolvable and intractable problems.[13]
* The concept of a programming language, a tool for the precise expression of methodological information at various levels of abstraction.[14]
* In cryptography, breaking the Enigma machine was an important factor contributing to the Allied victory in World War II.[11]
    * Scientific computing enabled advanced study of the mind, and mapping the human genome became possible with the Human Genome Project.[12] Distributed computing projects such as Folding@home explore protein folding.
* Algorithmic trading has increased the efficiency and liquidity of financial markets by using artificial intelligence, machine learning, and other statistical and numerical techniques on a large scale.[15]

Read more...

Computing

RAM (Random Access Memory)
Computing is usually defined as the activity of using and developing computer technology, computer hardware and software. It is the computer-specific part of information technology. Computer science (or computing science) is the study and the science of the theoretical foundations of information and computation and their implementation and application in computer systems.

Computing Curricula 2005[1] defined computing:

In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; processing, structuring, and managing various kinds of information; doing scientific studies using computers; making computer systems behave intelligently; creating and using communications and entertainment media; finding and gathering information relevant to any particular purpose, and so on. The list is virtually endless, and the possibilities are vast.

Read more...

Computer scientist


A computer scientist is a person who has acquired knowledge of computer science, the study of the theoretical foundations of information and computation and their application in computer systems.

Computer scientists typically work on the design of the software side of computer systems, versus the hardware side which computer engineers mainly focus on, although there is overlap. Computer scientists can work on, and research in, areas such as algorithm development and design, software engineering, information theory, database theory, computational complexity theory, human-computer interaction, computer programming, programming language theory, computer graphics, and computer vision.

Whatever their specific jobs, the term computer scientist should not be used interchangeably with related job titles such as software engineer or information technology specialist. Overall, computer scientists study the theoretical foundations of computing from which the other fields (software engineering, information theory, database theory, computational complexity theory, human-computer interaction, computer programming, programming language theory, computer graphics, and computer vision) derive. As its name implies, computer science is a pure science, not an applied science or applied business field. As an analogy to the medical field, a computer scientist is like the cancer researcher who might study molecular biology or biochemistry in depth, while an information technology specialist is like the physician who studies those fields at a higher level and focuses on their application to patient care.

Computer scientists can follow more practical applications of their knowledge, doing things such as software development, web development and database programming. Computer scientists can also be found in the field of information technology consulting.

Computer scientists normally get their degree in computer science at an accredited university or institution.

Read more...