XML and Web Services In The News - 13 October 2006
Provided by OASIS
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by Sun Microsystems
HEADLINES:
Open Archives Initiative Announces Object Reuse and Exchange (ORE)
Staff, OAI Announcement
The Open Archives Initiative (OAI), with the generous support of the
Andrew W. Mellon Foundation, announces a new effort as part of its
mission to develop and promote interoperability standards that aim
to facilitate the efficient dissemination of content. Object Reuse
and Exchange (ORE) will develop specifications that allow distributed
repositories to exchange information about their constituent digital
objects. These specifications will include approaches for representing
digital objects and repository services that facilitate access and
ingest of these representations. The specifications will enable a new
generation of cross-repository services that leverage the intrinsic
value of digital objects beyond the borders of hosting repositories.
The goals of ORE are inspired by advances in scholarly communication
and the growth of scholarly material available in repositories,
including institutional repositories, discipline-oriented
repositories, dataset warehouses, and online journal repositories.
This growth is significant by itself. However, its real importance
lies in the potential for these distributed repositories and their
contained objects to act as the foundation of a new digitally-based
scholarly communication framework. Such a framework would permit fluid
reuse, refactoring, and aggregation of scholarly digital objects and
their constituent parts — including text, images, data, and software.
This framework would include new forms of citation, allow the creation
of virtual collections of objects regardless of their location, and
facilitate new workflows that add value to scholarly objects by
distributed registration, certification, peer review, and preservation
services.
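ORE's specifications remain to be written, but OAI's existing OAI-PMH
protocol gives a feel for the kind of XML exchange such repositories
already support. Below is a minimal sketch of an OAI-PMH GetRecord
response; the repository URL, identifiers, and Dublin Core values are
illustrative:

  <!-- Sketch of an OAI-PMH 2.0 GetRecord response; the identifiers,
       dates, and metadata values are illustrative. -->
  <OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
    <responseDate>2006-10-13T12:00:00Z</responseDate>
    <request verb="GetRecord" identifier="oai:example.org:1234"
             metadataPrefix="oai_dc">http://repository.example.org/oai</request>
    <GetRecord>
      <record>
        <header>
          <identifier>oai:example.org:1234</identifier>
          <datestamp>2006-10-01</datestamp>
        </header>
        <metadata>
          <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                     xmlns:dc="http://purl.org/dc/elements/1.1/">
            <dc:title>An Example Scholarly Object</dc:title>
            <dc:creator>A. Author</dc:creator>
          </oai_dc:dc>
        </metadata>
      </record>
    </GetRecord>
  </OAI-PMH>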
See also: Open Archives Initiative Protocol (OAI-PMH)
A Case for Peering of Content Delivery Networks
Rajkumar Buyya, et al. (RMIT eds), IEEE Distributed Systems Online
Content Delivery Networks, which first evolved in 1998, replicate
content over several mirrored Web servers, strategically placed at
various locations to deal with flash crowds and to enhance response
time. A CDN improves network performance by maximizing bandwidth,
improving accessibility, and maintaining correctness through content
replication. Here, we present a model for an open, scalable, and
service-oriented architecture (SOA)-based system. This system helps
to create open Content and Service Delivery Networks (CSDNs) that
scale well and can share resources with other CSDNs through cooperation
and coordination, thus overcoming the island CDN problem. Our proposed
system ensures the quality of services based on SLA negotiation and
solves the problem of the logical separation between CDNs and CSNs.
We propose a Virtual Organization (VO) model for forming CSDNs that
share Web servers not only within their own networks but also with
other CSDNs. To encourage sustained resource sharing and peering
arrangements between different CDN providers at a global level, we
propose using market-based models for resource allocation and
management, inspired by their successful use in managing
autonomous resources, especially in global Grids. A
service registry enables CDN providers to register and publish their
resources and service details. An SLA-negotiator service and allocator
module uses this service registry to discover CDN providers and
negotiate QoS parameters and resource allocation to maximize
cooperative CSDNs' potential. A policy repository stores the policies
that the administrators generate. These policies are a set of rules
to administer, manage, and control access to VO resources. They
provide a way to consistently manage the components deploying complex
technologies.
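The article does not publish a concrete format for these policies.
Purely as an illustration, loosely modeled on the subject/resource/
action/effect pattern of policy languages such as XACML, a
policy-repository entry might look like the following; every element
name here is hypothetical:

  <!-- Hypothetical policy entry; the vocabulary is invented for
       illustration and comes from no published CSDN specification. -->
  <policy id="peering-042">
    <subject>cdn-provider-A</subject>
    <resource>vo:europe/edge-servers</resource>
    <action>replicate-content</action>
    <condition>
      <maxBandwidth unit="Mbps">100</maxBandwidth>
      <slaRef>sla-2006-17</slaRef>
    </condition>
    <effect>permit</effect>
  </policy>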
See also: the GRIDS laboratory web site
Deconstructing .NET 3.0
Matthew David, Informit.com
.NET 3.0 is somewhat different from the 1.x and 2.0 versions of the
.NET Framework.
The first two frameworks focused on allowing many different languages to
communicate with a common set of libraries translated through the
Common Language Runtime (CLR). Introduced with .NET 1.0 and enhanced
with .NET 2.0, the CLR works on a relatively simple concept: A common
runtime model executes code for any system running the .NET Framework.
The .NET 3.0 Framework doesn't improve upon existing technologies but
rather introduces four new foundation technologies: (1) Windows
Presentation Foundation (WPF); (2) Windows Communication Foundation
(WCF); (3) Windows Workflow Foundation (WWF); (4) Windows CardSpace
(WCS). Windows Presentation Foundation (WPF) is arguably the most
well-known of the four new foundation class sets. One interesting
aspect of WPF is XAML (pronounced "Zammel"), an XML-based language
that controls the layout of objects. This language has invited
comparisons with Flash. On the surface, the two seem similar, but
WPF and Flash differ significantly. Flash is a
mature, controlled, sandboxed framework that's independent of the
operating system. WPF allows you to integrate with the operating
system and other .NET Framework technologies. Flash and WPF are
essentially two very different technologies that will serve different
markets, with marginal crossover. The core purpose of the Windows
Communication Foundation (WCF) is to allow programs to talk to other
programs on the same computer or network, or across the Internet. The
WCF programming model unifies web services, .NET Remoting, distributed
transactions, and message queues into a single service-oriented
programming model for distributed computing. WCF is designed in
accordance with service-oriented architecture principles to support
distributed computing, in which services are used by consumers,
clients can consume multiple services, and services can be consumed
by multiple clients. Services typically have a WSDL interface that
any WCF client can use to consume the service, irrespective of the
platform on which the service is hosted. WCF implements many advanced
web services standards, such as WS-Addressing, WS-Reliability, and
WS-Security. .NET 3.0 is similar to the previous frameworks in that
it will run on multiple operating systems. At launch, .NET 3.0 will
run on Windows XP, Windows 2003/R2, and Windows Vista.
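To give a flavor of the XAML language discussed above, here is a
minimal WPF window whose layout is declared entirely in XML; the
element and attribute names follow the standard WPF vocabulary, but
the example itself is only a sketch:

  <!-- A window with a vertically stacked label and button; in WPF
       this layout lives in markup rather than procedural code. -->
  <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          Title="Hello WPF" Width="300" Height="150">
    <StackPanel>
      <TextBlock Text="Layout declared in XAML"/>
      <Button Content="Click Me" Width="100"/>
    </StackPanel>
  </Window>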
Europe Extends Open-Source Resource
Richard Thurston, CNET News.com
The European Commission is launching a resource for public sector
organizations to share open-source code and applications. The uncatchily
named Open Source Observatory and Repository (OSOR) aims to improve
the return on investment of open-source projects and to make
applications more interoperable. The resource is aimed purely at the
public sector, and the Commission believes it will be successful
because of the large number of similar projects being conducted by local
and national government organizations across the European Union. OSOR
is an extension to the Commission's existing Open Source Observatory Web
portal. The main extension is the creation of a repository of source
and object code and information on the use of applications, licenses,
and contract material. "The new OSOR should become the preferred
cooperation tool to speed up software pooling among Member States,"
said Karel De Vriendt, head of the EU's e-government services unit and
one of the driving forces behind the project. OSOR will be run under
contract to the Commission by Unisys, the Maastricht Economic Research
Institute on Innovation and Technology, Belgium-based consultants GOPA
Cartermill and Spain's Rey Juan Carlos University.
The Portlet Repository Protocol
Roy Russo, JBoss Blog
Along with the Portal team at Sun, [JBoss is] proud to announce the
start of a new protocol design for communicating with portlet
repositories. The idea for a standard repository protocol came after
discussions with Sun over the interoperability of disparate portlet
repositories with many portal vendors (as you know, they also have a
portlet repository), and how we could offer a standard medium of
communication between all players involved. So the idea was to create
a Web-Service-based API that would allow any portal vendor to browse
repositories, view individual portlet meta-data, and be able to
download/update portlets from any repository... much like developers
are accustomed to browsing/installing/updating plugins in their
favorite IDEs. It is an open standard, so that anyone may take part
and voice their opinions in its future development. What this means
for portal administrators is that one day they will be able to
install, update, and demo portlets from a myriad of repositories from
within their portal itself. It also means the portlet world will feel
a lot smaller, since where those portlets come from becomes
transparent to the user. "The Portlet Repository Protocol (PRP) project
seeks to define a common Web Service API used to communicate with
portlet repositories. It will also establish the format and meta-data
to be included when defining a specific portlet within a repository.
This is a free and open standards project that any portlet repository
may implement and any portal vendor may leverage. High-level
requirements include: (1) The protocol will allow listing of all
portlet applications available in the repository. (2) The protocol
will allow obtaining information about a particular portlet
application based on a unique identifier, defined in such a way that
it is unique across multiple vendor repositories. (3) The protocol
will allow searching the repository based on tags and other metadata
(e.g., portal vendor). (4) The protocol will provide a method to
query for newer versions of an already deployed portlet application.
(5) The protocol will be based on XML Web Services (SOAP/HTTP)."
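No WSDL had been published at announcement time, so the following
SOAP request is only a sketch of how requirement (1), filtered by the
tags of requirement (3), might look on the wire; the operation name,
namespace, and filter vocabulary are all hypothetical:

  <!-- Hypothetical PRP request; the operation and element names are
       invented for illustration. -->
  <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
      <listPortletApplications xmlns="urn:example:prp:1.0">
        <filter>
          <tag>weather</tag>
          <portalVendor>JBoss</portalVendor>
        </filter>
      </listPortletApplications>
    </soap:Body>
  </soap:Envelope>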
See also: the PRP web site
OASIS OpenDocument Metadata
Michael Brauer, GullFOSS Blog
At its meeting this Monday [2006-10-09], the OASIS OpenDocument
Technical Committee approved the OpenDocument Metadata Use Cases and
Requirements document, which is the first deliverable of the
OpenDocument Metadata Subcommittee. The document includes representative
use cases and requirements for enhancing OpenDocument's metadata
support, and it additionally lists a couple of design goals and
requirements. It will be the basis for the metadata subcommittee's
future work, and therefore indicates the direction in which
OpenDocument is moving regarding metadata. And because OpenDocument
is OpenOffice.org's native and default file format, I'm sure it also
indicates the direction in which OpenOffice.org may move. I recommend
the document to everyone who is interested in metadata. [The enhanced
metadata proposal consists of four parts: (1)
Metadata Model and Syntax: OpenDocument supports a subset of the
RDF model and XML syntax to provide a metadata framework with robust
and predictable extensibility that is easily processed with standard RDF
and XML tools. (2) Associating Metadata with Document Content. (3)
Metadata Extension Modules. (4) Identifying Document Fragments and
Metadata Collections: defining a convention to identify document
content and metadata graphs by IRIs, which allows them to be
referenced externally, and for additional metadata to be added to the
named graphs.]
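As a rough illustration of parts (1) and (4), metadata in the RDF
subset might describe a document fragment identified by an IRI as
follows; the RDF/XML syntax is standard, but the fragment-addressing
convention shown is an assumption, since the subcommittee has not yet
fixed it:

  <!-- Sketch only: the rdf:about convention for addressing an
       OpenDocument fragment is assumed, not final. -->
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
    <rdf:Description rdf:about="content.xml#section-1">
      <dc:creator>Alice Example</dc:creator>
      <dc:subject>Quarterly results</dc:subject>
    </rdf:Description>
  </rdf:RDF>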
See also: the Metadata Proposal Wiki
Major XSLT 2.0 Features and the 1.0 Shortcomings They Address
David Marston and Joanne Tong, IBM developerWorks
XSLT 2.0 introduces numerous new features, and some are specifically
designed to address XSLT 1.0 shortcomings. Explore some of the most
highly desirable features: grouping, implicit document nodes, user-
defined functions, date-time manipulation, schema-awareness, and
numerous output enhancements. In this collection of articles, you'll
get a high level overview and an in-depth look at XSLT 2.0 from the
point of view of an XSLT 1.0 user who wants to fix old problems, learn
new techniques, and discover what to look out for. Examples derived
from common applications and practical suggestions are provided if you
wish to upgrade. This first article describes the most frequently
requested major features in XSLT 2.0, leaving XPath and the function
library for later. XSLT 1.0 was mainly defined in two W3C documents:
XSLT and XPath. The XSLT 2.0 version is designed to align with XQuery,
and the XSLT family now includes six specifications in its core: XSLT,
XPath, Functions and Operators (F&O), Data Model (XDM), Formal Semantics,
and Serialization. Like XSLT 1.0, it is built upon several other
foundation specs (XML, namespaces, and so on), and it draws XML Schema
into its orbit. These were preceded by a requirements document that
stated objectives for the new features and justified the work on a 2.0
specification. Schema-awareness is an optional feature in XSLT 2.0.
If you find a processor that supports it, you might find it worthwhile
to use it. The schema-aware feature is mainly used for error checking.
It validates your input and output against an XML schema, and it allows
you to refer to source nodes in the stylesheet based on their Schema
type. This article is based on the Candidate Recommendation from
June 2006 and describes the enhancements in XSLT 2.0 that are most
likely to convince you to upgrade to the new version.
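As a taste of the grouping feature, the following minimal stylesheet,
assuming an input document of the form <items><item
category="...">...</item></items>, replaces the Muenchian-key
workaround that XSLT 1.0 required with a single instruction:

  <xsl:stylesheet version="2.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- XSLT 2.0 grouping: one xsl:for-each-group instruction instead
         of the 1.0 pattern of xsl:key plus generate-id() comparisons. -->
    <xsl:template match="/items">
      <categories>
        <xsl:for-each-group select="item" group-by="@category">
          <category name="{current-grouping-key()}">
            <xsl:copy-of select="current-group()"/>
          </category>
        </xsl:for-each-group>
      </categories>
    </xsl:template>
  </xsl:stylesheet>

Within the group body, current-grouping-key() and current-group()
give direct access to the key and the grouped nodes.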
See also: XSLT 2.0
Introducing OpenLaszlo
Sreekumar Parameswaran Pillai, XML.com
OpenLaszlo programs are written in XML and JavaScript and transparently
compiled to Flash. It is "write once, run everywhere." An OpenLaszlo
application developed on one machine will run on all leading web
browsers and on all leading desktop operating systems. Applications
made on OpenLaszlo can also run in solo mode as a desktop client. This
tutorial helps you get started on OpenLaszlo, which is an open source
platform for creating zero-install web applications with the user-
interface capabilities of desktop client software. The article refers
only to open source tools to set up a development environment for
Laszlo. Every step is narrated and illustrated with screenshots. The
tutorial also helps you to set up the environment with "IDE for Laszlo,"
an open source plugin for Eclipse that offers very convenient
features, while using Apache Ant for actual deployment. The goal is to
reduce the building and deployment time while developing with OpenLaszlo.
The recommendation to use Ant for deployment is due to its simplicity
and quick execution time. In comparison, the IDE for Laszlo plugin is
slow. Besides, Ant can provide additional features such as automated
testing, reporting, web application deployment, etc. From the web site:
"OpenLaszlo is an open source platform for creating zero-install web
applications with the user interface capabilities of desktop client
software. OpenLaszlo programs are written in XML and JavaScript and
transparently compiled to Flash and soon DHTML. The OpenLaszlo APIs
provide animation, layout, data binding, server communication, and
declarative UI. An OpenLaszlo application can be as short as a single
source file, or factored into multiple files that define reusable
classes and libraries."
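A complete OpenLaszlo application can indeed be a single source file.
Here is a minimal sketch: canvas, text, and button are standard LZX
tags, while the click handler is illustrative:

  <!-- hello.lzx: a complete OpenLaszlo application in one file.
       The onclick handler is an illustrative sketch. -->
  <canvas width="300" height="80">
    <text id="msg">Hello, OpenLaszlo!</text>
    <button text="Change" onclick="msg.setText('Button clicked')"/>
  </canvas>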
See also: the OpenLaszlo project web site
How to Study and Learn SAML
Jeff Hodges (Neustar), Draft SAML Whitepaper
This brief whitepaper provides a functional introduction to the SAMLv2
specifications, tailored to the perspectives of protocol designers
and developers. First a conceptual introduction is presented, next
suggestions on how to study and learn SAML are given, and then more
detailed aspects are discussed. SAML defines an XML-based framework
for crafting "security assertions", and exchanging them between
entities. In the course of creating, or relying upon such assertions,
SAML system entities may use SAML protocols, or other protocols, to
convey an assertion itself, or to communicate about the "subject" of
an assertion. Thus one can employ SAML to make statements such as:
"Alice has these profile attributes and her domain's certificate is
available over there, and I'm making this statement, and here's who
I am." Then one can cause such an assertion to be conveyed to some
party who can then rely on it in some fashion for some purpose, for
example by feeding it into a local policy evaluation that gates
access to some resource. Such applications of SAML are done in a particular "context
of use". A particular context of use could be, for example, deciding
whether to accept and act upon a SIP-based invitation to initiate a
communication session. The specification of just how SAML is employed
in any given context of use is known as a "SAML profile". The
specification of how SAML assertions and/or protocol messages are
conveyed in, or over, another protocol is known as a "SAML Binding".
Typically, a SAML profile specifies the SAML bindings that may be used
in its context. Both SAML profiles and SAML bindings in turn reference
other SAML specifications, especially the SAML Assertions and Protocols,
aka "SAML Core", specification. Note that the SAML Assertions and
Protocols specification, the SAML Core, is conceptually "abstract". It
defines the bits and pieces that make up SAML Assertions, and their
nominal semantics, but does not define how to actually put them to use
in any particular context. That, as we've said, is left to SAML
Profiles, of which there can be many.
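To make the "Alice" example concrete, a SAMLv2 attribute assertion
has roughly the following shape; the issuer, subject, and attribute
values are illustrative, and the XML Signature that would back up
"here's who I am" is omitted:

  <!-- Skeleton of a SAMLv2 assertion carrying one profile attribute;
       identifiers and values are illustrative, signature omitted. -->
  <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                  ID="_a75adf55" Version="2.0"
                  IssueInstant="2006-10-13T12:00:00Z">
    <saml:Issuer>https://idp.example.org</saml:Issuer>
    <saml:Subject>
      <saml:NameID>alice@example.org</saml:NameID>
    </saml:Subject>
    <saml:AttributeStatement>
      <saml:Attribute Name="department">
        <saml:AttributeValue>engineering</saml:AttributeValue>
      </saml:Attribute>
    </saml:AttributeStatement>
  </saml:Assertion>

A SAML profile would then specify the binding by which such an
assertion is conveyed in a given context of use.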
XML.org is an OASIS Information Channel
sponsored by BEA Systems, Inc., IBM Corporation, Innodata Isogen, SAP AG and Sun
Microsystems, Inc.
Use http://www.oasis-open.org/mlmanage
to unsubscribe or change an email address. See http://xml.org/xml/news_market.shtml
for the list archives.