Ethernet is, at bottom, a way to connect computers. It defines the physical media and low-level communications rules that allow various types of computers and computing devices to share a network. The agreed-upon rules that govern how messages are structured, sent by one computer or device and interpreted by others, are called protocols; each has a specific function. See the NetWorld department in this issue on page 51 for a description of the network layers and some of the protocols used in manufacturing.
Many pioneers of computing had a dream of connecting computers into a vast network on which scientists and researchers could share findings and stay in touch with projects and with each other. That early work on networking led to ARPANet, built for military computing researchers, and expanded into what we know today as the Internet. For additional insight, "The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal" by M. Mitchell Waldrop (Penguin Books, 2002) is a fascinating account of the development of personal computers, networking and the Internet.
The ubiquitous Transmission Control Protocol/Internet Protocol (TCP/IP) suite enabled Ethernet to attain the power and status that it enjoys today. The protocols divide the work: TCP lets computers and computing devices exchange data reliably, while IP addresses let them find each other on the network. One thing that makes this networking system more popular than any other is that it is application-protocol neutral.
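That division of labor can be sketched with Python's standard socket library: the IP address (here the loopback address 127.0.0.1, with a port chosen by the operating system) locates the other machine, and TCP carries a reliable stream of bytes between the two programs. The message contents are made up for illustration.

```python
import socket
import threading

def run_server(server_sock):
    conn, _addr = server_sock.accept()   # TCP: accept an incoming connection
    data = conn.recv(1024)               # reliable, ordered bytes arrive
    conn.sendall(b"ACK:" + data)         # reply over the same connection
    conn.close()

# AF_INET = IPv4 addressing, SOCK_STREAM = TCP's reliable byte stream
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # IP address plus an ephemeral port
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=run_server, args=(server,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))      # IP finds the host; TCP connects
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())                    # ACK:hello
```

Any application protocol, from e-mail to the Web, rides on exactly this kind of connection, which is what "application-protocol neutral" means in practice.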
The first protocols were designed to enable text-based communication over the newly created Internet. These protocols allowed the transfer of files from one computer to another (File Transfer Protocol, FTP), the sending of messages that became known as e-mail (Simple Mail Transfer Protocol, SMTP) and the administration of the network itself (Simple Network Management Protocol, SNMP).
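"Text-based" is meant literally: what an SMTP client hands to a mail server is readable text, with headers, a blank line and then the body. A small sketch using Python's standard email library shows the wire format; the addresses and message here are invented.

```python
from email.message import EmailMessage

# Build the message an SMTP client would transmit. Everything below
# (addresses, subject, body) is illustrative, not a real account.
msg = EmailMessage()
msg["From"] = "operator@plant.example.com"
msg["To"] = "engineer@hq.example.com"
msg["Subject"] = "Shift report"
msg.set_content("Throughput nominal on shift 2.")

# as_string() yields the plain text that travels over the network:
# header lines, a blank line, then the body.
wire_text = msg.as_string()
print(wire_text)
```

FTP and SNMP exchanges have the same flavor: simple commands and responses that a person can read directly, which made early debugging as easy as watching the traffic.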
The development of the World Wide Web allowed documents written in a markup language, called HyperText Markup Language (HTML), to be sent over the Internet using HyperText Transfer Protocol (HTTP), which itself rides on TCP/IP. Not quite 10 years old, this invention greatly increased the amount of information that could be shared.
Web pages written in HTML, even with the ability to load some dynamic information, very quickly went from cutting-edge novelty to being too limited for the fast-paced computing environment. A markup language called Extensible Markup Language (XML) and its associated standards were developed to enable exchange of data rather than complete Web pages.
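The shift from pages to data is easy to see in code. An XML document carries structured values that a receiving program can pull apart and use, rather than a layout for a person to look at. A sketch with Python's standard ElementTree parser, using a made-up reading from a plant-floor device:

```python
import xml.etree.ElementTree as ET

# Illustrative XML data, as a device or server might transmit it.
xml_data = """<reading>
  <device>pump-07</device>
  <temperature units="C">41.5</temperature>
</reading>"""

root = ET.fromstring(xml_data)
device = root.find("device").text                  # element text
temp = float(root.find("temperature").text)        # usable as a number
units = root.find("temperature").get("units")      # attribute value
print(device, temp, units)                         # pump-07 41.5 C
```

The associated standards the paragraph mentions (schemas, namespaces and the like) exist so that sender and receiver agree in advance on exactly this structure.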
New demands required a new communications protocol. Simple Object Access Protocol (SOAP) specifies exactly how to encode an HTTP header and an XML file so that a program on one computer can call a program on another computer and pass it information. It also specifies how the called program can return a response.
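Concretely, SOAP wraps the call in an XML "envelope" whose body names the remote procedure and its arguments, and sends it as an HTTP POST with a couple of distinguishing headers. A sketch of the request side, built with Python's ElementTree; the service name, method and parameter are illustrative, not a real interface.

```python
import xml.etree.ElementTree as ET

# The SOAP 1.1 envelope namespace.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

# Envelope > Body > the procedure to call, with its argument.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, "GetMachineStatus")   # hypothetical remote program
arg = ET.SubElement(call, "machineId")           # information passed to it
arg.text = "press-12"

xml_payload = ET.tostring(envelope, encoding="unicode")

# The HTTP headers a SOAP client would send with the POST:
http_headers = {
    "Content-Type": "text/xml; charset=utf-8",
    "SOAPAction": "GetMachineStatus",
}
print(xml_payload)
```

The response comes back the same way: an XML envelope in the HTTP reply, which the caller parses just as in the earlier XML example.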
An advantage of SOAP is that program calls are much more likely to get through firewall servers that screen out requests other than those for known applications. Since HTTP requests are usually allowed through firewalls, programs using SOAP to communicate stand a good chance of reaching programs almost anywhere.
More information about Web standards and protocols can be found at the Web site of the World Wide Web Consortium (www.w3c.org). This organization, founded by Web inventor Tim Berners-Lee, oversees development and approval of the various standards.
What will the Web of the future look like? An organization of researchers and companies is working on just this topic. PlanetLab (www.planet-lab.org) volunteers are developing a network of servers using the concept of slicing applications. Imagine a typical desktop computer with applications (such as Microsoft Word) and files all in one physical location. What if pieces, or slices, of all those files were dispersed around the Internet, yet were available for reassembly whenever called upon by the owner? Users would need only a terminal with access to the Internet to go to work.