A Reference Architecture for Developers: Chapter 5 - Web 2.0 Architectures

by James Governor, Duane Nickull, Dion Hinchcliffe

This excerpt is from Web 2.0 Architectures. This fascinating book puts substance behind Web 2.0. Using several high-profile Web 2.0 companies as examples, authors Duane Nickull, Dion Hinchcliffe, and James Governor have distilled the core patterns of Web 2.0 coupled with an abstract model and reference architecture. The result is a base of knowledge that developers, business people, futurists, and entrepreneurs can understand and use as a source of ideas and inspiration.


“Everything deep is also simple and can be reproduced simply as long as its reference to the whole truth is maintained. But what matters is not what is witty but what is true.”

--Albert Schweitzer

It’s time to move from Web 2.0 models to a Web 2.0 Reference Architecture, exploring more technical aspects that developers and architects must consider when building applications. In the process, we’ll map the model in Chapter 4, Modeling Web 2.0 to a technology view that facilitates the new patterns of interaction that we cover in Chapter 7, Specific Patterns of Web 2.0.

This Web 2.0 Reference Architecture does not reflect any constraints regarding implementation; it’s merely an artifact that developers, architects, and entrepreneurs can use to help them design and build Web 2.0 applications. For software architects and developers, a layered reference architecture serves to align their technical views regarding various aspects. More importantly, it offers a good starting place for anyone wishing to develop applications based on the topic covered by the reference architecture (in this case, Web 2.0). As with the model in the previous chapter, you should view this reference architecture as a starting point for your technology road maps, not the one true normative architecture for Web 2.0 application development.


We capitalize the term “reference architecture” when referring to the Web 2.0 Reference Architecture and lowercase the term when using it in the general sense.

About Reference Architectures

In general, a reference architecture is a generic and somewhat abstract blueprint-type view of a system that includes the system’s major components, the relationships among them, and the externally visible properties of those components. A reference architecture is not usually designed for a highly specialized set of requirements. Rather, architects tend to use it as a starting point and specialize it for their own requirements.

Models are abstract, and you can’t implement, or have an instance of, an abstract thing. Reference architectures are more concrete. They have aspects that abstract models do not, including cardinality (a measure of the number of elements in a given set), infrastructure, and possibly the concept of spatial-temporal variance (adding time as a concept).

Consider again the example of residential architecture. The domain of residential dwellings has both an implied model and a reference architecture for a class of things called “houses.” The implied model for a house is composed of or aggregated from the following components:

  • A foundation and/or other subfloor structure to connect the house to the underlying environment, whether it is earth or a body of water

  • Floors to stand on

  • Exterior walls to keep out the elements of nature and to provide structural support for the roof

  • A roof to protect the dwelling’s contents and occupants from the elements of nature and to provide privacy

  • Some form of entry and exit (possibly implemented as a doorway)

  • External links to some form of consumable energy (an interface to connect to the electricity grid, a windmill, or some other electricity-generating device)

This model for a house is minimalist and very abstract. It doesn’t detail such things as the type or number of floors, the height of the ceilings, the type of electrical system, and other things that become relevant only when a more concrete architecture (perhaps expressed as a set of blueprints) is made based on a specific set of requirements, such as “a residential dwelling in Denver for a family of five” or “a one-bedroom apartment in Berlin for an elderly person.”

Although this model is very abstract, we can add details to each item in the reference architecture. For example, we can specify that the foundation be built in the shape of a rectangle. Other aspects in the reference architecture become more concrete if we then specialize the model for human inhabitants. For example, the floor must be edged by exterior walls of sufficient height to allow humans to walk through the house without stooping, and interior walls must be constructed to separate the rooms of the house based on their purposes.

A reference architecture expressed as a set of generic blueprints based on the model discussed at the beginning of this section will be insufficient to actually build a modern residential dwelling. It doesn’t contain sufficient detail to serve as a set of builder’s plans. However, an architect can take the reference architecture and specify additional details to create the kind of plan that builders need. For example, the architect can specify the correct energy conduit to account for North American electricity delivery standards, design the interior floor plan based on the requirements of the house’s inhabitants, and so on. In sum, a reference architecture plays an important role as a starting point upon which more specialized instances of a class of thing can be built, with particular purposes and styles addressed as needed.

The Web 2.0 Reference Architecture, therefore, can provide a working framework for users to construct specialized Web 2.0 applications or infrastructures from specific sets of requirements. We’ll explore that architecture next.

The Web 2.0 Reference Architecture

The reference architecture shown in Figure 5.1, “Basic Web 2.0 Reference Architecture diagram” is an evolution of the abstract Web 2.0 model discussed in Chapter 4, Modeling Web 2.0, with more detail added for developers and architects to consider during implementation. Each layer can contain many components, but at the top level of abstraction, they provide a foundation that applies to many different kinds of application.

Figure 5.1. Basic Web 2.0 Reference Architecture diagram


The components of this architecture are:

Resource tier

The bottommost tier is the resource tier, which includes capabilities or backend systems that can support services that will be consumed over the Internet: that is, the data or processing needed for creating a rich user experience. This typically includes files; databases; enterprise resource planning (ERP) and customer relationship management (CRM) systems; directories; and other common applications an enterprise, site, or individual may have within its domain.

Service tier

The service tier connects to the resource tier and packages functionality so that it may be accessed as a service, giving the service provider control over what goes in and out. Within enterprises, the classic examples of this functionality are J2EE application servers deploying SOAP or EJB endpoints. Web developers may be more familiar with PHP, Rails, ASP, and a wide variety of other frameworks for connecting resources to the Web.


Connectivity/reachability

Connectivity is the means of reaching a service. For any service to be consumed, it must be visible to and reachable by the service consumer. It must be possible for potential service consumers to understand what the service does in terms of both business and technical consequences. Connectivity is largely handled using standards and protocols such as XML over HTTP, but other formats and protocols are also possible.
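
As a concrete illustration of connectivity, here is a minimal Python sketch of a service made visible and reachable behind an XML-over-HTTP-style interface. The `echo` service and the request/response shapes are invented for illustration and are not part of the reference architecture; a real deployment would bind the handler to an HTTP endpoint rather than calling it directly.

```python
# A hypothetical service exposed behind an XML interface. The consumer needs
# only the agreed message format and endpoint, not knowledge of the backend.
import xml.etree.ElementTree as ET

def handle_request(xml_payload: str) -> str:
    """Parse an XML request, dispatch to the named service, return an XML reply."""
    request = ET.fromstring(xml_payload)
    service = request.get("service")
    if service == "echo":  # the only service this sketch exposes
        body = request.findtext("body", default="")
        return f"<response service='echo'><body>{body}</body></response>"
    return "<response><fault>unknown service</fault></response>"

reply = handle_request("<request service='echo'><body>hello</body></request>")
```

The point is not the XML itself: any agreed format and protocol that makes the service visible and reachable plays the same role.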

Client tier

Client-tier software helps users to consume services and displays graphical views of service calls to users. Examples of client-side implementations include web browsers, Adobe Flash Player, Microsoft Silverlight, Acrobat, iTunes, and many more.

Design, development, and governance tools

This section encompasses the set of tools that enables designers and developers to build web applications. Typically, these tools offer them views into both the client and service tiers. Examples include Adobe Dreamweaver and Apple’s developer tools xCode and DashCode, though there are many integrated development environments (IDEs) out there, and many developers have their own custom sets of tools.

Each of these tiers can contain a wide variety of components. Figure 5.2, “Detailed reference architecture for Web 2.0 application architects and developers” shows many more possibilities in greater detail. (Tools come in so many forms that they aren't easily broken down.)

Figure 5.2. Detailed reference architecture for Web 2.0 application architects and developers (special thanks to Nabeel Al-Sharma, Dan Heick, Marcel Boucher, Laurel Reitman, Kevin Lynch, and Michele Turner for help developing this model)


This Web 2.0 Reference Architecture is very general, but it fulfills a purpose similar to that of the residential dwelling reference architecture discussed earlier. It should not be considered “the” sole authoritative Web 2.0 Reference Architecture. It is meant as a reference architecture that decomposes each of the concepts in Figure 5.1, “Basic Web 2.0 Reference Architecture diagram” into more detail. Software architects or businesspeople can use it as a starting point when designing a way to implement a certain set of design or architectural patterns over the Internet. It lets those people ask important questions that will be relevant to their purposes, such as “What type of client application do we need?” or “Where are we going to authenticate users?”

The Web 2.0 Reference Architecture is not tied to any specific technologies or standards, nor is it dependent upon them. Architects and entrepreneurs can decide how to implement this reference architecture using standards and technologies specific to their needs. For example, if services need to be reachable by and visible to the largest possible segment of users, they may choose a protocol such as HTTP for its simplicity, its ability to pass through most corporate firewalls, and its widespread adoption. They could also opt for other messaging protocols to meet special requirements, such as Web Services Reliable Exchange (WS-RX) for reliable messaging, Web Services Secure Exchange (WS-SX) for enhanced security, or BitTorrent for rapid distribution of multimedia content.

One last thing to note is that software implementations may choose to use all, some, or none of the individual components in each of the tiers. Figure 5.2, “Detailed reference architecture for Web 2.0 application architects and developers” is based on a commonly used set of components that enable the patterns described in Chapter 7, Specific Patterns of Web 2.0, but there are many simpler and more complex components that could be included.

The Resource Tier

This tier contains core functionality and capabilities and can be implemented in many ways depending upon the context. For example, a large enterprise might have an ERP system, an employees directory, a CRM system, and several other systems that can be leveraged and made available as services via the service tier. A smaller example might be an individual cell phone with a simple collection of vCards (electronic business cards), which are also resources and can also be made available as a service to be consumed, perhaps over a Bluetooth connection. Figure 5.3, “Detail view of the resource tier” shows a fairly complex enterprise resource tier.

Figure 5.3. Detail view of the resource tier


The resource tier is increasingly being integrated into web applications in order to build rich user experiences. As client-side applications become richer and software rises above the level of any one piece of hardware (or device), making small computational pieces available to the client tier becomes a tangible requirement for many resource owners. The result is software that is no longer tied to a specific operating system and that may, in the case of large enterprise systems, operate in the cloud.

While these inner boxes are meant only as exemplars of potential resources, we’ll look at each in detail to help you understand the overall architecture and some common capabilities:


Enterprise Information System (EIS)

Enterprise Information System (EIS) is an abstract moniker for a component common in most IT systems. EISs typically hold various types of data for use by those who run the company. An example might be a short-term storage database or sensors feeding information into a common repository.


Databases

Databases are typically used to persist data in a centralized repository designed in compliance with a relational model. Other types include hierarchical databases and native XML databases. Each type of database is tasked with handling centralized persistence of data that may be retrieved later for a variety of purposes. Databases vary in size and complexity based on the amount and nature of the data stored, and may be supplemented by classic filesystems or other data-storage mechanisms.


Directories

Directories are lookup mechanisms that persist and maintain records containing information about users. Such records may include additional details for authentication or even business data pertaining to their purpose. Using a centralized directory is a good architectural practice that avoids errors arising from mismatches in the state of any one person's records. Examples include LDAP-based systems and Microsoft's Active Directory.

ECM repository

Enterprise content management (ECM) repositories are specialized types of EIS and database systems. While they typically use databases for long-term persistence, most ECM systems are free to use multiple data-persistence mechanisms, all managed by a centralized interface. ECM systems are often used for long-term storage and have several common features related to the various tasks and workflows enterprises have with respect to data management.

Message queues

Message queues are ordered lists of messages for inter-component communications within many enterprises. Messages are passed via an asynchronous communications protocol, meaning that the message’s sender and receiver do not need to interact with the message queue at the same time. Typically, messages are used for interprocess communication or inter-thread communication to deal with internal workflows; however, in recent years the advent of web services has allowed enterprises to tie these into their service tiers for inter-enterprise communication. Popular implementations of this functionality include JMS, IBM’s WebSphere MQ (formerly MQSeries), and, more recently, Amazon’s Simple Queue Service (SQS).
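
The decoupling described above can be sketched in a few lines of Python; the standard-library `queue.Queue` stands in for a real broker such as JMS, WebSphere MQ, or SQS, which add durability, transactions, and routing on top of the same basic idea.

```python
# Minimal message-queue sketch: sender and receiver interact only with the
# queue, never with each other, and need not be active at the same time.
from queue import Queue

inbox = Queue()

def produce(message):
    inbox.put(message)   # the sender enqueues and returns immediately

def consume():
    return inbox.get()   # the receiver drains messages later, in FIFO order

produce("order-created")
produce("order-shipped")
first, second = consume(), consume()
```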

Legacy systems

The last component is a catchall generally used to denote anything that has existed through one or more IT revolutions. While some view legacy systems as outdated or slated-to-be-replaced systems, the truth is that most systems become legacies as a result of working well and reliably over a long period of time. Examples of legacy systems might include mainframes and systems such as IBM’s CICS.

The Service Tier

At the core of the service tier (shown in Figure 5.4, “Detail view of the service tier”) is a service container, where service invocation requests are handled and routed to the capabilities that will fulfill the requests, as well as routing the responses or other events, as required. (Java programmers will be familiar with the servlet container model, for example.)

Figure 5.4. Detail view of the service tier


The service container is a component that will assume most of the runtime duties necessary to control a service invocation request and orchestrate the current state, data validation, workflow, security, authentication, and other core functions required to fulfill service requests. In this context, a service container is an instance of a class that can coordinate and orchestrate service invocation requests through to their conclusion, whether successful or not.
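
The container life cycle just described can be sketched as follows. The handler functions and the fault string are hypothetical, and a real container would also handle rollback, security, and workflow rather than just recording an outcome.

```python
# A service container instance carries one invocation request through to its
# conclusion, whether successful or not, and records the outcome.
class ServiceContainer:
    def __init__(self, handler):
        self.handler = handler
        self.state = "created"

    def invoke(self, request):
        self.state = "running"
        try:
            result = self.handler(request)
            self.state = "completed"
            return result
        except Exception as exc:
            self.state = "faulted"   # a real container might also roll back systems
            return f"fault: {exc}"

ok = ServiceContainer(lambda r: r.upper()).invoke("ping")
failing = ServiceContainer(lambda r: 1 / 0)
fault = failing.invoke("ping")
```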

Figure 5.4, “Detail view of the service tier” illustrates several elements within the service tier:

Service invocation layer

The service invocation layer is where listeners are plugged in to capture events that may trigger services to perform certain actions. It may utilize several types of adapters or event listeners to allow invocation of services; for example, communication endpoints may provide the triggers (typically SOAP or XML over HTTP), or the services may be invoked temporally via timeout events, or even via internal events such as a change in the state of some piece of data within the resource tier. While many articles and papers focus on incoming messages arriving via SOAP endpoints, other forms of invocation are often used.

Service container

Once a service is invoked, a container instance is spawned to carry out the service invocation request until it is either successfully concluded or hits a fatal error. Service containers may be either short- or long-lived instances, have permissions to access many common and specific services (core services) within the service tier, and can communicate with external capabilities via the service provider interface.

Business rules and workflow

All service invocation requests are subject to internal workflow constraints and business rules. A service request being fulfilled may have to navigate a certain path in order to reach its final state. Examples include forms being routed to the correct personnel for approval or parameters for a request for data being authenticated to determine if the request meets the business’s acceptance criteria. Some commercial service bus products also offer a workflow engine and designer as part of the service tier.
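
A rules check of this kind might look like the following sketch. The specific rules here (an amount threshold, an approver role) are invented examples, not rules mandated by the reference architecture.

```python
# Business rules applied to a service invocation request before fulfillment:
# the request proceeds only if no rule is violated.
def check_rules(request):
    violations = []
    if request.get("amount", 0) > 10_000:
        violations.append("amount exceeds approval threshold")
    if request.get("approver_role") != "manager":
        violations.append("request must be routed to a manager")
    return violations

ok = check_rules({"amount": 500, "approver_role": "manager"})     # no violations
bad = check_rules({"amount": 50_000, "approver_role": "clerk"})   # two violations
```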


Registry/repository

A registry is a central component that keeps track of services, perhaps even in multiple versions. The registry may also track secondary artifacts such as XML schemas, references to service capabilities, and other key resources that are used by the service tier. A repository is a component used to persist resources or data needed during the execution of short- or long-running service invocation requests. The registry and the repository can be used both during design time, to orchestrate amongst multiple services, and at runtime, to handle dynamic computational tasks such as load balancing and resource-instance spawning and to provide a place where governance and monitoring tools can use data to give insight into the state of the entire service tier. Data stored in a repository is often referenced from the registry.
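
At its simplest, the registry side of this component can be sketched as a versioned lookup table. The service names, versions, and metadata below are invented, and real registries store far richer artifacts (WSDL, policies, certificates, audit trails).

```python
# A minimal service registry: services are tracked by (name, version), with
# metadata pointing at secondary artifacts such as schemas and endpoints.
registry = {}

def register(name, version, metadata):
    registry[(name, version)] = metadata

def lookup(name, version):
    return registry.get((name, version))  # None if the service is unknown

register("quote", "1.0", {"schema": "quote-v1.xsd", "endpoint": "/quote"})
register("quote", "2.0", {"schema": "quote-v2.xsd", "endpoint": "/v2/quote"})
entry = lookup("quote", "2.0")
```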

Service provider interface (SPI)

Since the service tier makes existing capabilities available to be consumed as services, an SPI is required to connect to the resource tier. The resource tier in this case is a generic descriptor for a virtual collection of capabilities. The SPI might be required to communicate with several different types of applications, including CRM systems, databases, ECM systems, plain old Java objects (POJOs), or any other resources that provide the capabilities to power a service.

Service containers are the physical manifestation of abstract services and provide the implementation of the internal service interfaces. This can require substantial coordination. If, for example, a service request proves to be outside the service policy constraints, the service container might be required to generate a fault and possibly roll back several systems to account for the failed invocation request, as well as notifying the service invoker. The service container depicted in Figure 5.4, “Detail view of the service tier” might also use the registry/repository to help in service fulfillment.

Additionally, the core service tier ties into backend capabilities via the SPI. Implementers of Web 2.0–type applications will have to consider the following integration questions while designing their systems, some of which will affect how the SPI is configured and what protocols it must handle:

  • What systems or capabilities will I want to connect with?

  • What set of core services will I need to provide as part of my infrastructure? (Some examples include authentication, encryption, and email notifications.)

  • What business rules will I have to enforce during the service invocation process, and how will I describe and monitor them?

  • How will services be invoked? What invocation patterns will transpire within my infrastructure?

There are other complications as well. Service interaction patterns may vary from a simple stateless request/response pair to a longer-running subscribe/push. Other service patterns may involve interpretation of conditional or contextual requests. Many different patterns can be used to traverse the service tier, and the semantics associated with those patterns often reside with the consumer’s particular “view.” For example, if a person is requesting a service invocation to retrieve data, they may define that service pattern as a data service. From a purely architectural perspective, however, this is not the case, because every service has a data component to it (for more specifics on the relationship between a service and data, see the discussion of the SOA pattern in Chapter 7, Specific Patterns of Web 2.0). Another person with a more specific view might try to define the service as a financial data service. This is neither right nor wrong for the consumer, as it likely helps them to understand the real-world effects of the service invocation. However, the granularity and purpose of the service pattern is in the eye of the beholder.

Service registries are central to most SOAs. At runtime they act as points of reference to correlate service requests to concrete actions, in much the same way the Windows operating system registry correlates events to actions. A service registry has metadata entries for all artifacts within the SOA that are used at both runtime and design time. Items inside a service registry may include service description artifacts (WSDL), service policy descriptions, XML schemas used by various services, artifacts representing different versions of services, governance and security artifacts (certificates, audit trails), and much more. During the design phase, business process designers may use the registry to link together calls to several services to create a workflow or business process.

Service registries help enterprises answer the following questions:

  • How many processes and workflows does my IT system fulfill?

  • Which services are used in those processes and workflows?

  • What XML schemas or other metadata constraints are used for the services within my enterprise?

  • Who is using the services I have within my enterprise?

  • Do I have multiple services doing the same function?

  • What access control policies do I have on my services?

  • Where can users of my services be authenticated?

  • What policies do I have that are common to multiple services?

  • What backend systems do my services talk to in order to fulfill invocation requests?

This is only a starter set of questions; you may come up with many more. The SOA registry/repository is a powerhouse mechanism to help you address such questions.

It’s also worth discussing the service invocation layer in Figure 5.4, “Detail view of the service tier” in greater detail. The service invocation layer is where service invocation requests are passed to the core service container. It can hook into messaging endpoints (SOAP nodes, Representational State Transfer interfaces, HTTP sockets, JMS queues, and so on), but service invocations may also be based on events such as timeouts, system failures and subsequent power-ups, or other events that can be trapped. Client software development kits (SDKs), custom libraries, or other human or application actors can also initiate invocation requests. In short, several types of invocation are inherent in SOA design; to fulfill the patterns of Web 2.0, maintain flexibility by using the service invocation layer as a sort of “bus” that kicks off service invocations. Realizing that patterns other than request/response via SOAP (what many people consider to be SOA) may be employed to invoke services will result in a much more flexible architecture.
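
The “bus” idea can be sketched as a small event dispatcher: listeners register for event types, and any matching event, whether a message arrival or a timer firing, kicks off a service. The event names and services here are hypothetical.

```python
# Service invocation layer as a bus: multiple trigger types (messages,
# timeouts, data changes) all funnel into service invocations the same way.
from collections import defaultdict

listeners = defaultdict(list)
invoked = []

def on(event_type, service):
    listeners[event_type].append(service)   # register a service for a trigger

def fire(event_type, payload):
    for service in listeners[event_type]:
        invoked.append(service(payload))    # each matching listener is invoked

on("soap-message", lambda p: f"order-service({p})")
on("timeout", lambda p: f"cleanup-service({p})")

fire("soap-message", "new-order")   # request/response-style trigger
fire("timeout", "nightly")          # temporal trigger, no message involved
```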

The Client Application Tier

The client application tier of the Web 2.0 Reference Architecture, shown in Figure 5.5, “Detail view of the client application tier”, contains several functional components that are managed by the controller, the core application master logic and processing component. Every client application has some form of top-level control. The concept used here is in alignment with the controller concept in the Model-View-Controller (MVC) pattern.

Figure 5.5. Detail view of the client application tier


Web 2.0 clients often have several runtime environments. Each runtime is contained and facilitated by a virtual machine. Thus, while the runtime environments are launched and managed by a single controller, they remain somewhat autonomous with respect to executing scripts or bytecode to control certain aspects of an application. The virtual machine itself is a specialized type of controller, but it’s controlled by a master controller that’s responsible for launching and monitoring virtual machines and runtime environments as they’re required.

To clarify the relationships, let’s break out each component in the client application tier diagram, starting at the top:


Controller

The controller contains the master logic that runs all aspects of the client tier. If the client is a browser, for example, the core browser logic is responsible for making a number of decisions with respect to security and enforcement, launching virtual machines (such as Flash Player or the Java Virtual Machine), rendering tasks pertaining to media, managing communication services, and managing the state of any data or variables.

Data/state management

Any data used or mutated by the client tier may need to be held in multiple states to allow rollback to a previous state or for other auditing purposes. The state of any applications running on the client tier may also need to be managed. Companies like Google and Adobe are getting very aggressive in developing advanced functionality with respect to data and state management to allow online and offline experiences to blend (i.e., using Google Gears and the Adobe Integrated Runtime, or AIR). Mozilla’s Firefox browser now supports connections to SQLite databases as well.

Security container/model

A security model expresses how components are constrained to prevent malicious code from performing harmful actions on the client tier. The security container is the physical manifestation of the model that prevents the runtime enactment of those malicious scenarios. Almost every client-tier application includes some kind of security model (unrestricted access to local resources is in fact a security model), and each must have a corresponding set of mechanisms to enforce its security policies. A security sandbox in a browser, for example, prevents website authors from injecting into a web page any code that might execute in a malicious manner on the user’s machine.
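
As a toy illustration of sandbox enforcement, the following sketch confines access to a single allowed root. The paths are hypothetical, path normalization is omitted for brevity, and real browser sandboxes enforce far more than filesystem scope.

```python
# Sandbox-style check: code in the container may only touch resources under
# an allowed root; everything else is denied by default.
from pathlib import PurePosixPath

SANDBOX_ROOT = PurePosixPath("/sandbox/app1")

def may_access(path):
    candidate = PurePosixPath(path)
    # allowed only if the path is the sandbox root or lies beneath it
    return candidate == SANDBOX_ROOT or SANDBOX_ROOT in candidate.parents

inside = may_access("/sandbox/app1/cache/data.json")
outside = may_access("/etc/passwd")
```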

Virtual machines

Virtual machines (VMs) are plug-ins that can emulate a specific runtime environment for various client-side technologies. Virtual machines were the foundation for Java, including its early move to the Web as “applets,” but the busiest VM these days is likely the ActionScript Virtual Machine, the core runtime behind Adobe’s Flash. VMs are often built in alignment with the applications’ security models, to keep runtime behavior from escaping the emulated environment.

Rendering and media

Management of the media and rendering processes is required to present a graphical interface to users (assuming they are humans). The client tier handles all aspects of rendering. For example, in a browser, HTML and CSS might first be parsed and an internal representation made in memory, which can be subsequently used to build a “view” of the web page.


Communication services

With every client-tier application, communication services are required. These are often constrained in accordance with the security model and orchestrated by the controller based on a number of criteria (online/offline synchronization, small AJAX-like calls back to the server, etc.). The communications aspect of the client tier usually incorporates various stacks and protocols so that it can speak HTTP and HTTPS, support the XMLHttpRequest object, and more.

The client-side rendering engine handles all the “view” behavior for GUIs, as well as media integration. Rendering engines also pass information to the virtual machines and are controlled by the master controller. The data/state management mechanisms in the client tier control transformations, state synchronizations, transitions, and state change event generation during the object life cycle.

On client systems, allowing access to local resources—even read-only privileges—represents a primary security risk. A sandbox philosophy typically confines the runtime environment and keeps it separate from the local system. Access to local system resources is usually denied unless explicitly granted for browser-based applications, unlike desktop applications, which enjoy a greater degree of interaction. There’s also a new breed of hybrid smart client applications that exist outside the browser, without being full-featured applications. Examples include widgets, gadgets, and Adobe’s AIR applications. These applications use a hybrid security model that must be carefully thought out, as smart client and composite applications that mash up content from more than one site can experience runtime problems with mismatched security privileges between domains.

The communication services manage all communications, including between the client and server, with the host environment, and with resources in other tiers. Together with data/state management services, the communication services must be aware of the connection status and understand when to locally cache data and where to go to synchronize data states once interrupted connections are reestablished. Several of the patterns discussed in Chapter 7, Specific Patterns of Web 2.0, such as the Synchronized Web and Mashup patterns, require a careful orchestration of these resources.
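
The connection-aware behavior described above can be sketched as follows. The in-memory “server” list is a stand-in for a real remote store, and the class name is invented for illustration.

```python
# Connection-aware communication services: while offline, writes are cached
# locally; on reconnect, the cached state is synchronized to the server.
class CommService:
    def __init__(self):
        self.online = True
        self.pending = []   # locally cached writes while disconnected
        self.server = []    # stand-in for the remote data store

    def send(self, item):
        if self.online:
            self.server.append(item)
        else:
            self.pending.append(item)

    def reconnect(self):
        self.online = True
        self.server.extend(self.pending)   # flush cached writes in order
        self.pending.clear()

svc = CommService()
svc.send("a")
svc.online = False
svc.send("b")      # connection lost: cached locally, not sent
svc.reconnect()    # connection restored: "b" is synchronized
```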

Architectural Models That Span Tiers

The SOA and MVC architectural models are key pillars of Web 2.0. The service tier and the client application tier must be built using similar design principles so that they can provide a platform for interaction. Resource tier and client application tier designers tend to abide by the core tenets and axioms of the Reference Model for SOA and apply application design principles such as MVC. The MVC paradigm encourages design of applications in such a way that data sets can be repurposed for multiple views or targets on the edge, as it separates the core data from other bytes concerned with logic or views.

Model-View-Controller (MVC)

MVC, documented in detail at http://en.wikipedia.org/wiki/Model-view-controller, is a paradigm for separating application logic from the data and presentation components. MVC existed long before Web 2.0, but it can be usefully applied in many Web 2.0 applications and architectures. It’s a design pattern whereby application code is separated into three distinct areas: those concerned with the core data of the application (the Model), those concerned with the interfaces or graphical aspects of the application (the View), and those concerned with the core logic (the Controller). MVC works across multiple tiers: it allows application reskinning without affecting the control or data components on the client, and it enables Cloud Computing, where virtualization results in the controller and model concerns being distributed in opaque regions while still fulfilling their required roles.
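A minimal JavaScript sketch of this separation (the object names are illustrative, not taken from any particular framework) shows how the three areas stay independent, so the View can be swapped to reskin the application without touching the data or the logic:

```javascript
// Model: the core data of the application, with no knowledge of presentation.
const model = {
  items: [],
  add(item) { this.items.push(item); }
};

// View: purely concerned with rendering the Model's data as markup.
const listView = {
  render(m) {
    return '<ul>' + m.items.map(i => '<li>' + i + '</li>').join('') + '</ul>';
  }
};

// Controller: the core logic, mediating between input, Model, and View.
const controller = {
  handleAdd(item) {
    model.add(item);                // update the data
    return listView.render(model);  // refresh the presentation
  }
};
```

Because the View touches the Model only through `render`, replacing `listView` with a different renderer reskins the application while the Model and Controller remain unchanged.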

Web 1.0 applications tended to mix together the Model, View, and Controller. Early applications were built around requests for specific pages, with every page a program of its own and data linked primarily by SQL calls to shared databases. On the client, HTML blended behavior, presentation, and processing. Although HTML used different elements for these functions, the result was still a mixed-up markup language.

Web application development has evolved, however. On the client, HTML still provides a basic framework, but it has become more of a container for parts that support the different aspects of MVC more cleanly. While the HTML itself still often contains much information, XML and JSON offer data-only formats that clients and servers can use to communicate their data more precisely, according to models created early in development. (This makes it easier for other users or applications to repurpose one aspect of an application, such as using the data—i.e., the Model—in a mashup.) Controllers have also advanced. Script languages such as JavaScript and ActionScript are great for client-side processing. The View components can be realized with a multitude of technologies; however, the most common are HTML/XHTML, thanks to their use of references and layout to load and position graphics in a variety of formats, including JPEG, GIF, PNG, SVG, and Flash. On the server side, new frameworks supporting MVC approaches have helped create more consistent applications that integrate more easily with a variety of clients and servers.
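For example, a server can expose just the Model as JSON, with no view markup traveling with it, and a mashup on another site can parse the same data and apply its own View. A hypothetical sketch (the field names are invented for illustration):

```javascript
// The server serializes only the Model -- no view markup is sent.
const model = {
  title: 'Web 2.0 Architectures',
  authors: ['Nickull', 'Hinchcliffe', 'Governor']
};
const wirePayload = JSON.stringify(model);

// A mashup on another site parses the same data and applies its own View.
const remoteModel = JSON.parse(wirePayload);
const headline = remoteModel.title + ' by ' + remoteModel.authors.join(', ');
```

This is what makes the Model repurposable: the consuming application decides how to present the data, rather than scraping a presentation format.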

Service-Oriented Architecture (SOA)

The other major advance, SOA, provides a more flexible foundation than the client/server models of the past. Client/server architecture did not fully account for the variety of standards and extensible protocols that services under the control of different ownership domains could use. That wasn’t a problem in the early days of the Internet, but as more protocols and ownership domains appeared, the architectural pattern of SOA became necessary to facilitate proper architectural practices among the designers of the applications connected to the Internet.

In the context of this book, SOA refers to an architectural paradigm (style) for software architecture, in much the same way that REST is an architectural style. SOA does not mean “web services,” nor would every implementation of web services be automatically considered an SOA. An example of an application built using web services standards that is not an SOA would be an application that used another component over SOAP, but in which the SOAP component’s life cycle was tied directly to the consumers. This concept is known as tight binding, and it doesn’t allow repurposing of services.
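The distinction can be sketched in JavaScript (the class names are illustrative). In the tightly bound case, the consumer constructs and owns the component, tying the component’s life cycle directly to that one consumer; in the service-oriented case, the consumer depends only on an agreed interface, so the service can be replaced or repurposed by other consumers independently:

```javascript
// Tightly bound: the consumer creates the component itself, so the
// component's life cycle is tied directly to this one consumer.
class TightConsumer {
  constructor() {
    this.tax = { rate: 0.05, calc(amount) { return amount * this.rate; } };
  }
  total(amount) { return amount + this.tax.calc(amount); }
}

// Service-oriented: the consumer is handed any service honoring the
// agreed interface (here, a `calc` method), so the same service can be
// repurposed elsewhere and swapped without touching this code.
class LooseConsumer {
  constructor(taxService) { this.tax = taxService; }
  total(amount) { return amount + this.tax.calc(amount); }
}
```

Only the second arrangement permits the repurposing of services that SOA requires; the first is the tight binding described above, even if the component happens to be reached over SOAP.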

This definition of SOA might not be the same definition you had in mind when you picked up this book. We encourage you to decouple SOA from any standards or technologies and ask some hard questions like, “If SOA is architecture (as the name implies), how is it expressed as architecture?” and “What is unique about SOA that is not inherent in other architectural models?” Also consider SOA apart from specific implementation technologies and ask, “If X is SOA, what is not SOA?” (replacing X with your definition of SOA).

So many of the core patterns we’ll explore in Chapter 7, Specific Patterns of Web 2.0 depend on SOA as an architectural pattern that SOA itself is presented as the first pattern. The Mashup pattern relies on services, the Software as a Service pattern consumes computational functionality as services, and the Rich User Experience pattern often employs SOA on the backend to retrieve contextually specific data to make the user’s experience much richer.

Consistent Object and Event Models

Our Web 2.0 Reference Architecture leaves some items, such as consistent object and event models, outside of the tiers themselves. These items relate to several tiers of the reference model. For example, if a developer wishes to develop an application that listens for changes to the state of an object and catches them as events, the entire architecture must have a consistent model for objects and events, or at least a model for making them consistent eventually.[66] These models may vary slightly if the developers are using several technologies within their projects, and it is important for architects to understand and be able to account for the differences. Consider an Adobe Flex frontend (client-tier) application that is coupled with a .NET backend (server tier). If the Flex client needs to capture events, the model of how events are generated and detected and how messages are dispatched has to be consistent throughout the entire application.
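A hypothetical sketch of what “consistent” means in practice: if both tiers agree on the shape of an event and on a dispatch/listen contract, a listener written against that contract can handle events regardless of which tier they originate in. The class and method names below are illustrative, loosely echoing the common addEventListener idiom:

```javascript
// A shared event contract: both tiers agree that an event carries a type
// and a payload, and that listeners register by type.
class EventDispatcher {
  constructor() { this.listeners = {}; }
  addEventListener(type, fn) {
    (this.listeners[type] = this.listeners[type] || []).push(fn);
  }
  dispatchEvent(type, payload) {
    (this.listeners[type] || []).forEach(fn => fn({ type, payload }));
  }
}

// Because the contract is consistent, the same listener works whether a
// change event originates locally or arrives from the server tier.
const bus = new EventDispatcher();
const seen = [];
bus.addEventListener('change', e => seen.push(e.payload));
bus.dispatchEvent('change', 'local edit');   // client-originated event
bus.dispatchEvent('change', 'server push');  // server-originated event
```

When the two tiers disagree on this contract (as can happen between, say, a Flex client and a .NET backend), it falls to the architect to supply the mapping between the two event models.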

Some Web 2.0 patterns, such as the Mashup and Synchronized Web patterns (described in Chapter 7, Specific Patterns of Web 2.0), demand a consistent model for both objects and events. Those building composite applications might have to deal with events occurring on objects residing in several remote domains and different environments. SOA makes this somewhat easier by providing a clearly defined interface to the objects; however, the high-level models need to be aligned.

Over the past decade, this has become easier. The W3C recognized the need for consistent approaches to objects a long time ago and has developed several recommendations on this subject. The Document Object Model (DOM) is the base technology used to address many XML and HTML pages, and the XML Infoset provides a further layer of abstraction. Even Adobe’s Portable Document Format (PDF), Microsoft’s Office format, and the Organization for the Advancement of Structured Information Standards’ OpenDocument Format (OASIS ODF) largely correspond to the same conceptual model for a document object. Most programming and scripting languages have also evolved in a similar manner to have a roughly consistent view of events and objects.

What’s new is the way in which some patterns use the events and objects across both the client and the server. Whereas in Web 1.0 many events were localized to either the client or the server, architectural paradigms have evolved whereby events on one might be caught and used by the other. Also new in Web 2.0 implementations is the ability to capture events on objects persisting on multiple systems, act on these events in another application, and then aggregate the results and syndicate them to the client. AJAX applications can support this model across systems with thousands of clients. The Google personalized home page,[67] a prime example of this approach, syndicates data to each client based on its template. Small updates to the resulting page are made when new events are communicated to the AJAX framework behind the page view, and changes to the model result in updates to the view.

We’re almost ready to launch into the patterns, but before we discuss them in detail, let’s take a brief detour to explore the metamodel or template that all the patterns in Chapter 7, Specific Patterns of Web 2.0 use.

