Web Services
The next generation of business-to-business (B2B) interfaces will come with an improved set of standards and more automated development. As this book is being written, standards such as Simple Object Access Protocol (SOAP), Web Services Description Language (WSDL), and Universal Description, Discovery, and Integration (UDDI) promise to change the way automated transactions are performed. These particular standards might not be the ones that form the backbone of the next-generation Web, but the solution they represent definitely will. There's too much information on the Web to program interfaces by hand and create custom interfaces for each partnership. Automation for Web processes is needed, and the model that will make it happen is called Web services.
In fact, the Web services model extends past business interfaces to encompass all types of automated network interactions. Within the existing "Web browsing" model, Web sites are seen as distinct locations visited directly by users with the intent to browse through available documents. The assumption is that a user takes a single path through a single site; thus, no interactions between multiple paths or sites are necessary. It's also assumed that a site contains all the information needed to display the proper documents to the user. For instance, the Travelocity site might retain seat preference information for a particular user, but that information had to be entered by the user at that same site. The user can't indicate that the Travelocity Web site should visit the Expedia site to get the required information.
With a Web services model, all programs are given access to the documents and interfaces provided by a site, and both the parameters and the results are encoded in a way that programs can understand without human interaction. This means that both client and server programs can get the information they need from Web services, even if a server program is accessing a Web service to process a request to its own Web service. For example, the Travelocity Web site might indicate to a browser that it needs information about seat preference. The user could indicate that the information is accessible through the Expedia site's preferences Web service. Better yet, a personal travel agent program on the user's computer could contact the Travelocity Web service directly to book a flight and then provide information from an escrow agency's Web service as input. With automated services such as these, sites can be linked into chains to perform useful tasks with as little user input as possible. In fact, the process can be taken over by automated systems like in-dash Global Positioning System (GPS) computers, which have the computing power and network connections necessary to access simplified Web services.
A Traveler's Story
It's late, and you're tired. You've been driving all day and most of the night, and you're starting to lose track of exactly where you are. The GPS claims you're still heading in the right direction, but you don't know this stretch of road from any other stretch in this godforsaken backwater.
Trippy has been a busy boy. He's been monitoring the Driver's current position and speed, running up ahead to check road conditions, and sniffing around to see if there are any factoids about the countryside that the Driver is currently traversing. There isn't much on the Net about this area, and the few things he's been able to find come up empty after parsing them against the Driver's preferences. Oh well, there's always more to find.
Just a few hours more. You thought you'd make good time on this trip, but you haven't been driving for more than 14 hours and already you're tired. You even stopped for dinner, so make that 13 hours on the road. Still tired, though.
It's ten o'clock and your eyes are drooping. The road's getting blurry and you almost missed that last curve. You really should get some rest. "All right," you say, giving in at last. "Trippy, where's the nearest hotel with a vacancy?"
The Driver gave a command, and Trippy takes a few milliseconds to figure out what he might possibly mean. ALICE has a pretty narrow idea of the possibilities, so Trippy figures that the Driver would like a room according to his Travelocity profile in a hotel that resides somewhere within a few minutes' driving distance from the next half-hour stretch of his route. "I'll check," he responds to the Driver.
Trippy checks a directory site and the usual group of travel portals, compiles a list of hotels within the zip codes the Driver is currently approaching, and sorts them based on their distance from the route. He queries the first twenty hotel sites to check for vacancies and prices and rules out those that don't have available rooms in the Driver's price range. A few sites don't respond to Trippy at all, so he rules them out after a few seconds as well.
Trippy takes this new list and sorts the rooms by their ratings as listed on the Fodor's, Let's Go, and Lonely Planet sites. Unrated rooms drop off the list. Trippy creates a knowledge tree of the top five, plots the potential course changes, and updates the driver.
Three seconds after you asked, the GPS tells you it found something. "There's a Travelodge nearby with rooms available for US$78 per night," it says in its Stephen Hawking voice. That sounds good, but you could swear you saw a sign for $69.95 a couple exits back. Or was that a couple hours ago?
You ask, "Is there anything cheaper?"
The smarmy GPS immediately fires back, "There's a U-Pay Moto Tel not far from here that has rooms available for US$56 per night. Its reputation is poor."
Fine, whatever. "Reserve the Travelodge," you say. Anything that gets you out of this car and into a clean bed is just fine.
Trippy whirls into action again. The route is changed to reflect the new stop. He queries the Travelocity site to make sure it can reserve a room on short notice. It can, so he makes the reservation using the Driver's cryptocard identification and his default Travelocity profile.
You pull up to the Travelodge after an eternal fifteen minutes of drooping eyelids and loud music. "You are zero miles from your next stop," the GPS chimes happily. Duh.
The night desk clerk looks about as awake as you are, but he gives you a room key as soon as you show ID. You are about to go up to your room when your stomach grumbles. It's been a while. You go back to the lobby, past the night desk, and out the door. You can't help but feel a little dependent as you open the car door and start the engine.
"Anything to eat around here?"
Automation: The Holy Grail
A technology such as the Trippy GPS computer sounds like something out of the Jetsons, but all the information necessary to create the exchange is already present on today's World Wide Web (WWW). Travelocity and other travel sites keep profiles of customer preferences. Sites such as Yahoo! can search for a business by its location. Many hotel companies have sites for information and reservations, and even small bed-and-breakfast hotels are developing their own. In addition, all the client technologies (voice interaction, GPS navigation, and in-dash computing) are well-defined and in production use. A system could be put together using today's tools, such as IBM's ViaVoice, the ALICE natural-language interface, and standard car-computing hardware.
The only part of this picture that's truly lacking is the automation necessary to bring the information from the network into the client technologies. Clients to access the Web are plentiful, but most Web information is in a form that's suitable only for visual browsing by a human using one of a narrow set of browsers. Rendering most Web pages on an alternative browser is a difficult task, as is extracting relevant data from the Web programmatically. Trying to feed information from the current Travelocity or Travelodge sites into a voice computer is hard enough; getting that system to automatically sort through possibilities and present the best option would be nearly impossible. The problem, of course, lies with HTML.
The Problem with HTML
HTML provides a good interface for human-computer interaction. It's a language originally designed to facilitate communication through documents that are easy to read and easy to compose. Since then, HTML has become the de facto interface description standard on the WWW. HTML-formatted interfaces have been developed for business applications, games, directories, and a host of other information systems.
Automating interactions with a Web site would seem like a trivial matter. These interfaces are available through standard Internet protocols, so any computer with TCP/IP networking can connect. A multitude of Perl modules are available to help connect to sites, send requests, and retrieve results. As a result, it is trivial to write a Perl program that simulates a user when connecting to Web interfaces.
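As a minimal sketch of such a simulated user, the following program uses the standard LWP::UserAgent module to fetch a page and submit a form the way a browser would; the URLs and form fields are hypothetical stand-ins:

#!/usr/bin/perl
# simulate-user.pl - a minimal sketch of a program posing as a browser;
# the URLs and form fields below are hypothetical
use strict;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(agent => 'Trippy/0.1');

# Fetch a page exactly as a browser would
my $response = $ua->get('http://www.example.com/hotels.html');
die 'Request failed: ' . $response->status_line unless $response->is_success;
my $html = $response->content;

# Submit a search form by sending the same fields a browser would send
my $search = $ua->post(
    'http://www.example.com/search.cgi',
    { city => 'Barstow', max_rate => 80 },
);
print $search->content if $search->is_success;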
However, HTML as it's used by Web interfaces is a terrible way to encode data for automated use. Its structure is loosely defined and geared more toward the display of arbitrary text information than the storage of data. HTML is designed to be a formatting and display language, and it is modeled more on word processing document formats than data encoding formats. An HTML document can contain all the information relevant to a task, but it's likely to be stored in a format that is more readable by humans than programs. All a typical browser can do is display the text and graphics specified in the HTML file, so automated programs have no standard way of extracting the information that a Web interface might return.
Screen Scrapers
Extracting data encoded in HTML pages is a difficult and time-consuming task, even for a person sitting in front of a visual browser. Anyone who has tried to copy data from an HTML table or source code listing into another program has run across the problem: data displayed on screen isn't necessarily being represented the same way in the HTML file. Usually, the user ends up having to copy and paste individual parts of the browser display to arrange them properly in another application.
An automated program that performs the equivalent of these steps on an HTML file is called a screen scraper because it "scrapes" the important information out of the file based on how it would render to a visual browser on screen. Screen scrapers normally have to make educated guesses about where in the HTML file a particular piece of data might be. They often are coded by hand to specify the usual location of data in relation to surrounding text.
In Perl, screen scrapers usually are implemented using regular expressions. An HTML file in this situation is seen as simply a large text value to be searched for patterns. Any HTML formatting that might be present usually is ignored. A more structured interface such as HTML::Parser is rarely used because most Web pages don't contain enough valid structure to be parsed meaningfully by HTML::Parser, and the structure of the resulting document might vary widely depending on tags that otherwise are ignored by visual browsers. If HTML tags are taken into account, it's usually to provide an example of the text surrounding a value sought by the screen scraper. For instance, a screen scraper might want the values from the following HTML text:
Listing 16.
<p><b>Eric Hedstrom</b> has many fine desert panoramas at his
site, <a href="http://jacinto.yi.org/">Jacinto</a>.
The name of the site's owner would be searched for in an area bracketed by <b> and </b> tags, while the URL and name of the site would be found between the href attribute and the trailing </a> tag, separated by the characters ">. Writing a regular expression to gather these values requires the assumption that the basic formatting of the site will not change. Unfortunately, the format of a site normally changes almost as rapidly as screen scrapers can be developed. Thus, these information gatherers present only a temporary solution.
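A minimal scraper for this fragment might use two regular expressions, as in the following sketch; it depends entirely on the <b> and <a href> markup staying exactly as shown:

#!/usr/bin/perl
# A bare-bones screen scraper for the HTML fragment above; it breaks
# as soon as the site's markup changes
use strict;

my $html = <<'END_HTML';
<p><b>Eric Hedstrom</b> has many fine desert panoramas at his
site, <a href="http://jacinto.yi.org/">Jacinto</a>.
END_HTML

# Guess at the data's location based on the surrounding tags
my ($owner) = $html =~ m{<b>(.*?)</b>};
my ($url, $site_name) = $html =~ m{<a href="(.*?)">(.*?)</a>};

print "Owner: $owner\nSite: $site_name at $url\n";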
Beyond Screen Scrapers
Screen scrapers are only a stopgap designed to provide compatibility when no other means of gathering the same information is available. No business or organization could possibly build an information infrastructure on such an unstable and labor-intensive interface. In addition, the overhead of processing excess formatting to extract needed information can be extreme. In some cases, a program might need to search through HTML files hundreds of times larger than the information it ends up extracting. Worse, full-text searches of this nature tend to require the least efficient search methods.
Historically, the only alternatives to screen scrapers were customized interfaces that reduced the uncertainty of processing transmissions. Companies such as CyberCash that interacted with many e-commerce sites on a variety of platforms had to create custom software for each platform to interact with their servers. These systems were reliable after they were developed, but new development was difficult enough to discourage rapid adoption, and a different proprietary protocol had to be used for each service provider. In an effort to provide standards, formats such as Electronic Data Interchange (EDI) were developed. They used a fixed and standardized file format. However, these formats tended to be tailored to screen-scraper-like processing, and new development was still difficult and labor-intensive.
With Extensible Markup Language (XML) and the wide adoption of Internet protocols, the idea of combining HTML-like formatting simplicity with robust Internet security and stock Web software began to emerge. XML provides the potential for data storage formats that are legible to humans and easily readable by programs. In addition, XML formats can be transported over any number of existing Internet networking protocols, which enables programmers to use all the security features and server software already available for those protocols. As XML-based automation standards such as XML Remote Procedure Call (XML-RPC) and SOAP take hold, the goal of truly automating network interactions gets even closer.
SOAP and Web Services
SOAP provides a good alternative to the cluttered, unstructured data returned by most Web sites. Aspects of the protocol specification that are unlikely to change from one interface to the next are rigidly defined, but the protocol still enables a wide range of data constructs to be passed in a SOAP envelope. With the SOAP protocol and existing Web architecture, it's possible to create interfaces to Web information systems that require little custom programming. Because these interfaces also can be described using the same framework, with no outside negotiation, they are the closest yet to being truly automated.
These automated interfaces to networked systems are called Web services. The Web services model combines the easy implementations of Web servers with the structured XML data of the SOAP protocol. The idea is to provide a Web service interface for every aspect of a business or organization and then publish complete descriptions of the interface to directories of similar Web services. Clients wishing to use a Web service then can browse the directory, find Web services that match their needs, and implement the services automatically. Thus, custom interface programming and specific partnership agreements give way to a Web-wide infrastructure the same way that custom networks gave way to the Internet.
Existing Web Services
Many Web services already are available, and sites that collect them into coherent directories have started to appear. XMethods (http://www.xmethods.com) and XMLToday (http://www.xmltoday.com) are sites geared toward SOAP enthusiasts, so many of the Web services listed are of an experimental nature. These sites host the service description for each Web service, but they also offer details on how to use the Web service in a client application. Some services even include sample programs implementing the service description as a client, many written in Perl using SOAP::Lite. Adding these services to an existing program is likely to get even easier as description standards become more widely used. See the "WSDL and UDDI" section later in this chapter for more information.
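For instance, one of the demonstration services long listed on XMethods returned the current temperature for a given United States zip code; a SOAP::Lite client for it looked roughly like the following sketch (the namespace, endpoint, and method reflect that demonstration service and might no longer be in operation):

#!/usr/bin/perl
# Query a demonstration temperature service with SOAP::Lite; the
# endpoint and namespace shown here may have changed or gone offline
use strict;
use SOAP::Lite;

my $temperature = SOAP::Lite
    -> uri('urn:xmethods-Temperature')
    -> proxy('http://services.xmethods.net/soap/servlet/rpcrouter')
    -> getTemp('93304')
    -> result;

print "Current temperature: $temperature degrees\n";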
In the future, many Internet-savvy businesses are likely to implement SOAP interfaces to their service offerings. Early candidates include shipping companies, such as FedEx and UPS, who probably will provide their shipping services through SOAP as well as their existing custom interfaces. Document consolidation services also are likely to be early adopters of the Web services model. Examples include job posting boards such as Monster.com and Dice.com that process thousands of job postings, resume listings, and match requests daily. A set of Web services for common functions (posting a resume or getting details about an available position) would be of use both to users of the site and other sites with similar data. An additional standard could even be developed on top of SOAP to specifically suit the types of Web services these sites would offer.
Exposing Systems as Web Services
The core of a Web service is the set of SOAP interfaces it implements. Each interface is defined in terms of an object to which it's attached, a method that it implements, and the parameters that it accepts and emits. For instance, a SOAP interface to log into a job posting site might be implemented as a method called getLoginToken attached to an object called Jobs. The getLoginToken method might accept parameters called username and password and return a parameter called token. The method would be one of many implemented as part of the Jobs object, which together would make up the full Web service.
In Perl, SOAP::Lite is the preferred tool for implementing SOAP interfaces. SOAP::Lite enables each SOAP method to be defined as a subroutine with either named or unnamed parameters used as the arguments for the method. SOAP objects are represented as Perl modules, so the process for creating a full Web service would follow the standard procedures for creating modules in Perl. For instance, the Jobs object mentioned could be created using SOAP::Lite and a module named Jobs.pm, with subroutines named getLoginToken and so on. SOAP::Lite handles most thorny data serialization problems, so object references, session variables, and authorization tokens all can be implemented in a Perlish fashion.
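As a sketch of how that Jobs object might be put together (the credential check, token format, and file names here are invented for illustration), the object itself is an ordinary Perl module and the SOAP endpoint is a short CGI dispatcher:

# Jobs.pm - a sketch of the Jobs SOAP object described above;
# the credential check and token format are placeholders
package Jobs;

use strict;

sub getLoginToken {
    my ($class, $username, $password) = @_;

    # A real implementation would check the credentials against the
    # site's user database; here any nonempty pair is accepted
    die "Invalid login\n" unless $username && $password;

    # Return an opaque token for the client to pass to later calls
    return join '-', $username, time(), int(rand 10_000);
}

1;

#!/usr/bin/perl
# jobs-soap.cgi - expose the Jobs module through SOAP::Lite's CGI
# transport; Jobs.pm must be findable through @INC
use strict;
use SOAP::Transport::HTTP;
use Jobs;

SOAP::Transport::HTTP::CGI
    -> dispatch_to('Jobs')
    -> handle;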
When deciding how to implement a Web service for an existing HTML-based project, consider the work that already has been performed. If existing Web interfaces can be wrapped in a set of subroutines, considerable effort can be avoided. Form variables can be serialized easily into method parameters, and method names can be derived from existing page names, if necessary. Most of the work then would lie in encoding the output in a SOAP-friendly format rather than in an HTML-formatted page.
After creation, Web services can also be reused as the back end of a standard Web site. Object methods created for use with SOAP::Lite can be used with local Perl programs as well, so it's possible to encapsulate all the logic of a site in the object modules and use them with both. This reduces the job of the Web site designers to simply translating form requests into method calls and formatting the results using HTML templates.
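For example, the hypothetical Jobs module sketched earlier could sit behind an ordinary CGI script on the HTML side of the same site, with the script doing little more than translating form fields into a method call; the form fields and output below are stand-ins for a real template:

#!/usr/bin/perl
# jobs-login.cgi - the HTML site reusing the same Jobs module locally
use strict;
use CGI;
use Jobs;

my $q = CGI->new;

# Translate the submitted form fields directly into a method call
my $token = eval {
    Jobs->getLoginToken(scalar $q->param('username'),
                        scalar $q->param('password'));
};

print $q->header('text/html');
print $token
    ? "<p>Logged in; your session token is $token.</p>"
    : "<p>Login failed.</p>";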
Clustering Methods Usefully
Of course, Web services shouldn't follow the same conventions as Perl object methods simply because the interfaces can be connected automatically. Many object methods are defined for use by programmers with intimate knowledge of the interfaces they define. Simply exposing these methods directly as Web services won't make for service descriptions that are easy to understand.
Instead, Web services should be designed to be as human-readable and understandable as possible. Methods should be given descriptive titles that reflect their intended functions, and all parameters should be given names that make sense within the service description. For instance, an object method implemented on the server as getHeads with the parameters string, int, and bool probably won't illuminate the purpose of the function very well. However, a SOAP method definition such as the following would make it clearer:
Listing 16.
<getNewsHeadlinesRequest>
  <siteName xsi:type="xsd:string" />
  <numberOfHeadlines xsi:type="xsd:int" />
  <provideSynopsis xsi:type="xsd:boolean" />
</getNewsHeadlinesRequest>
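On the client side, SOAP::Lite can supply those named parameters as SOAP::Data objects, as in the following sketch; the namespace URI, endpoint, and return format are hypothetical:

#!/usr/bin/perl
# Call the getNewsHeadlines method described above; the namespace URI
# and proxy endpoint are invented for illustration
use strict;
use SOAP::Lite;

my $result = SOAP::Lite
    -> uri('urn:NewsService')
    -> proxy('http://www.example.com/soap')
    -> getNewsHeadlines(
           SOAP::Data->name(siteName          => 'Example News'),
           SOAP::Data->name(numberOfHeadlines => 5),
           SOAP::Data->name(provideSynopsis   => 1)->type('boolean'),
       )
    -> result;

# Assuming the service returns a reference to a list of headline strings
print "$_\n" for @{$result};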
In addition, Web service methods should be clustered into SOAP objects that reflect a single overall purpose, however those methods are implemented on the server. In practice, this means that functions usually separated in Perl code (such as database access, session handlers, and search algorithms) shouldn't be presented as individual SOAP objects. Instead, methods should be clustered into objects based on their purpose within the Web service, such as retrieving shipping information or placing a customer service request.
WSDL and UDDI
The automation of Web interfaces is a noble goal, but it won't save much effort if the description of such services is left to traditional means. The current way to describe Web services involves pages of text devoted to values programmers must use when writing a special-purpose program to access the service. Because the goal of these services is complete automation, it would be wise to use the same standards (XML and network protocols) to define the interface descriptions as well as the interfaces. Thus, an automated Web services client also can automate the way it accesses services.
WSDL and UDDI standards were developed to do just this. Between the two, they promise to automate the description and discovery of Web services.
The WSDL Specification
WSDL is an XML language used to describe Web services, no matter how they are implemented. In current practice, a WSDL file describes a single SOAP object and all the methods it makes available. WSDL files consist of a series of declarations, each of which describes an aspect of the Web service to be defined. Sections declared include the endpoint location and the name of the object described in the specification. Additional sections cover a list of object methods defined, the encoding type used for requests and responses, and the protocol (for example, SOAP) used for communications.
A major part of any WSDL service specification is the structure of the request and response messages accepted and emitted by the server for any given object method. This structure might be expressed in any notation that is itself valid XML. Initially, this means that messages can be described in terms of a basic array of parameters, where each parameter is of a known data type. As the standard progresses and gains better implementations, the XML Schema language can also be used to describe more complex structures with custom data types. The end result is the capability to transmit full XML document types such as cXML or Extensible HTML (XHTML) through SOAP and describe the transactions automatically using WSDL and XML Schema.
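The practical payoff is that a client can be driven straight from the WSDL file. With SOAP::Lite, for example, the service method builds a stub from a description so that the methods it declares can be called directly; the WSDL location and the getLoginToken method in the following sketch are hypothetical:

#!/usr/bin/perl
# Drive a Web service from its WSDL description; the WSDL location
# and the getLoginToken method are hypothetical
use strict;
use SOAP::Lite;

my $service = SOAP::Lite
    -> service('http://www.example.com/Jobs.wsdl');

# Methods declared in the WSDL file become directly callable, with no
# endpoint or namespace configuration written into the client
my $token = $service->getLoginToken('someuser', 'secret');
print "Token: $token\n";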
WSDL files can be created by hand for each Web service, or they can be generated automatically for services that share a common set of attributes. The basic structure of a WSDL file can be specified as a template, which then is filled in with the appropriate values from the SOAP server and defined Web services. These values might be difficult to ascertain for systems that use SOAP::Lite or similar automated abstraction layers, but these same layers end up being the best way to implement WSDL automation anyway. Using templates to generate these files enables Web service descriptions to be defined consistently, even when the WSDL specification changes. Changes to the format from one revision of the specification to another simply require a change in the structure of the template because the underlying data to be described isn't likely to change much.
UDDI Directories
After a WSDL file is available for a Web service, it can be published to a UDDI directory. UDDI directories are Yahoo!-style listings of businesses and the services they offer. The services listed won't be limited to Web services using the SOAP protocol, but a major use of UDDI directories will be to store Web service descriptions and provide a standard way to find an appropriate Web service for a particular application. UDDI directories are initially being developed and deployed by Microsoft and IBM, the major backers of the UDDI standard, but additional directories are slated to go online as the standard gets more firmly defined.
A UDDI directory can be searched by business type, business name, or the type of service being offered. Initial UDDI directories can be accessed either through a dynamic HTML interface or an automated SOAP interface. The SOAP interface actually is being touted as the primary interface to UDDI data, in keeping with the Web services model. The SOAP interface also will enable UDDI directories to be kept synchronized with each other, thus forming a virtual directory that can be accessed through any individual implementation with consistent results. Eventually, it's likely that modules such as SOAP::Lite will use the same SOAP interfaces to publish WSDL files to UDDI directories automatically as Web services are being deployed.
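As a sketch of what that automated access might look like from Perl, the UDDI::Lite module distributed with SOAP::Lite can query an inquiry endpoint; the registry address and business name below are illustrative only, and the interface details may shift along with the specification:

#!/usr/bin/perl
# Look up a business by name in a UDDI registry using UDDI::Lite;
# the inquiry endpoint and search term are illustrative only
use strict;
use UDDI::Lite;

# Print the registered name of the first matching business
print UDDI::Lite
    -> proxy('http://uddi.microsoft.com/inquire')
    -> find_business(name => 'Travelodge')
    -> businessInfos->businessInfo->name, "\n";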
Sidebar: The Changing Face of Standards
UDDI, WSDL, and SOAP are just the latest round of standards created to answer the same set of nagging problems. However, they aren't likely to be the last. Over the course of writing a SOAP interface to a generic XML server, I encountered no fewer than four iterations of the UDDI specification with various names and three iterations of WSDL under its parade of acronyms. SOAP itself is no exception. If it keeps up with the early pace of the HTML standard, the standard is likely to "officially" change faster than most client implementations can keep up. Such a disparity opens up the threat of fragmentation and implementation-specific features.
I hold one ray of hope, though, based on the fact that XML itself has been relatively resistant to change since the 1.0 standard was put in place. Because XML is the basis for the other standards, it provides a fixed set of boundaries within which all future standards can be expected to stay. Plus, because XML is regular enough to be programmable at the base level, much of the work that goes into supporting the derivative standards will apply no matter how they change. Who knows if any of the Web services standards will last very long, but the idea behind Web services is here to stay.
Summary
A new idea in B2B integration is encapsulated in the Web services model, which uses XML and Internet protocols to provide truly automated interfaces. Existing HTML interfaces have the advantage of wide use, but the work involved in automating them is prohibitive and prone to instability. SOAP provides a better interface language for implementing automated Web services because it's cross-platform and easy to implement in Perl. Exposing interfaces as Web services involves more than a simple translation from Perl modules to SOAP objects; however, much of the work that already has been done to create Web interfaces can be reused to add Web services. After services are available, they can be described using WSDL files and posted to UDDI directories, where they can be discovered and implemented without any custom programming at all.