XML and Web Services In The News - 17 November 2006
Provided by OASIS
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by Sun Microsystems, Inc.
HEADLINES:
A New Scheme for Data Sharing
John Moore, Government Health IT
Semantic interoperability isn't a phrase that rolls off the tongue, but
health informatics experts believe the concept has the potential to
significantly improve communication among health information systems.
Recent moves to commercialize semantic technologies have increased
interest in the topic. At least two broad-based projects specifically
target health care: the World Wide Web Consortium's (W3C) Semantic Web
Health Care and Life Sciences Interest Group and the federally oriented
Health Information Technology Ontology Project (HITOP). Both were
launched in 2005. Those groups have spent the past few months raising
awareness about semantic interoperability and its health care
implications. Les Westberg, senior software architect and engineer at
Northrop Grumman IT, pointed to the Armed Forces Health Longitudinal
Technology Application (AHLTA) as an example. The military's electronic
health records system was based on an ontology and information model
from the beginning. AHLTA is now adding Web services alongside other
middleware technologies that predate them. The Military Health System
is also pursuing a Terminology
Service Bureau. Westberg called the project the ultimate ontology.
Scheduled for delivery late last month, the bureau will help integrate
AHLTA with other health systems. The World Wide Web Consortium's
Semantic Web Health Care and Life Sciences Interest Group has four
active task forces exploring various facets of semantic interoperability.
The group's Bio RDF task force, for example, plans to build a
demonstration system that will collect biochemical and neurological
data using the Semantic Web as a data combination tool, said Ivan Herman,
who leads W3C's Semantic Web activity. The demo will focus on Parkinson's
disease. The Bio-Ontology task force will work with the Bio RDF group
on the Parkinson's project, Herman said. The Bio-Ontology group's
charter is to develop best practices for creating and using ontologies
in life sciences and health care.
Extensible Markup Language (XML) Configuration Access Protocol (XCAP) Co-operation with HTTP Extensions for Distributed Authoring (WEBDAV)
Jari Urpalainen, IETF Internet Draft
The Extensible Markup Language (XML) Configuration Access Protocol
(XCAP) was designed to store XML documents on an HTTP server. XCAP
also allows individual XML document components, i.e., XML elements and
attributes, to be patched with the basic HTTP PUT and DELETE methods.
An XML document thus usually contains many XCAP resources, which are
accessed by means of a node selector in the path segment of the request
URI. The document tree structure is also described by the core XCAP
protocol. HTTP Extensions for Distributed Authoring (WebDAV) provides
many useful HTTP extensions for web content authoring, covering many
MIME types beyond XML documents. The extension set includes properties,
collections, locks, and namespace operations on WebDAV resources. With
the WebDAV Access Control Protocol, access to shared resources can
easily be granted or denied. This document describes conventions for
XCAP servers that utilize these WebDAV authoring extensions. The aim is
to reuse existing specifications with compatibility in mind: an
existing XCAP client can still use the resources of a server that
complies with the rules described in this document.
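As a sketch of the node-selector convention mentioned above (not taken from the draft itself; the server name, application usage, and document path below are hypothetical), an XCAP request URI addressing a single element inside a stored document can be assembled like this:

```python
from urllib.parse import quote

# Hypothetical XCAP root and document selector (AUID / users tree / document)
xcap_root = "http://xcap.example.com"
doc_selector = "resource-lists/users/sip:joe@example.com/index"

# Node selector: an XPath-like path to one element within the document;
# characters such as '[', ']' and '"' must be percent-encoded in the URI
node_selector = 'resource-lists/list[@name="friends"]'
encoded = quote(node_selector, safe="/@=")

# "~~" separates the document selector from the node selector
uri = "%s/%s/~~/%s" % (xcap_root, doc_selector, encoded)
print(uri)
```

A PUT to such a URI replaces only the selected element and a DELETE removes it, which is how the per-component patching described above is achieved with plain HTTP methods.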
See also: the IETF SIMPLE WG
Rich Web Application Backplane
Mark Birbeck, John Boyer, Al Gilman (et al., eds), W3C Note
W3C's Hypertext Coordination Group has released a "Rich Web Application
Backplane" document as a Coordination Group Note. The authors maintain
that submission, data models, model-view binding and behavior, and web
components can provide a common infrastructure for multiple markup
formats. Web 2.0 combines a desire for increased interactivity and
responsiveness in Web applications with a desire to drive an
exponentially growing supply of applications through component-based
(e.g., 'mash-up') rather than monolithic design methods. Interactivity
and responsiveness result largely from asynchronous programming methods
where the traditional page replacement design is replaced by enhanced
client-side processing and incremental server interactions. Server
interactions may either refresh data or presentation controls, without
the disruption in end-user experience caused by complete page
replacement. Component-based designs have resulted from the increasing
trend of web authors to expose APIs within their client-side code,
allowing for downstream (i.e. after page-generation) extension of those
components with value-added data or presentation elements — not
anticipated or controlled by the original page author. The Web Apps
APIs WG has in its charter extensions to XMLHTTP, the backbone of AJAX
applications. XForms has an asynchronous submission element which
similarly is used to incrementally refresh content between its data
model and the server. The "Backplane" paper proposes that there are a
number of such common building blocks underlying web application design
that cut across boundaries of working groups, boundaries of namespaces
(XHTML, XForms, SVG, VoiceXML, etc), and that cut across boundaries of
procedural (e.g. scripting) vs. declarative programming styles. By
working toward a common definition of those building blocks, which
we call a 'rich web application backplane', we can support a more
pluggable and composable infrastructure for web developers, without
constraining their choice of namespace or programming technology, and
hence accelerate the ecosystem of web 2.0 developers.
WS-Transactions Update
Eric Newcomer, Weblog
Since WS-TX was chartered about a year ago, we have been working to
refine the three submitted V1.0 specs and advance them toward adoption
as standards of an independent consortium.
Ultimately, standards are all about adoption - many specs have been
written that go nowhere, while other technologies have become standard
without ever going through the committee process. For the
WS-Transactions specifications I'd say we are getting what you'd call
sufficient critical mass: IBM, Microsoft, Red Hat (JBoss), and IONA
all currently provide implementations. In addition, we have regular
participation from Sun, Hitachi, Oracle, Nortel, Fujitsu, Adobe, Tibco,
Choreology, and individuals (John Harby) — all of whom attended this
face to face either in person or via phone. This is pretty good
considering the work is nearly completed. Because we are getting
close to the end, we have been spending more time on "fit and finish"
issues and polishing up the text than we did when we first started.
One of the major issues for the recent face to face was getting the
specs consistent with RFC 2119 (yes, there is a standard for using
certain words in standards ;-). On the WS-TX TC home page you can
also find links to home pages for each of the three specifications:
WS-AT, WS-C, WS-BA. WS-C V1.1 and WS-AT V1.1 entered their 60-day
public review mid September, and so we also had the chance at the F2F
to discuss and resolve issues submitted during the public review
process, which is basically the final cycle. WS-BA 1.1 will be going
into the public review phase soon, again based on the work we did
during the F2F. Once the specifications have completed their public
reviews the next step will be to submit them to become OASIS standards.
If they are accepted, the work of the TC is essentially completed.
See also: the OASIS WS-TX TC
Dynamic Webpages with JSON
Ajay Raina and John Jimenez, JavaWorld Magazine
Making asynchronous HTTP requests from Webpages is an effective
technique in bringing seemingly static pages to life. Asynchronous
JavaScript and XML (AJAX) has become a popular technology in creating
richer and more dynamic Web clients, and is often used to incorporate
desktop features in the browser. However, the usual XMLHttpRequest-
based AJAX clients suffer from the limitation of only being able to
communicate with the server from which they were downloaded. This becomes
problematic for deployment environments that span multiple domains.
In addition, developers end up writing browser-specific code since
each of the main browsers implements this XML request object
differently. In this article, we describe an approach based on
JavaScript Object Notation (JSON) that, in the spirit of Web 2.0,
makes it easy to build mashup applications without the cross-domain
and cross-browser limitations of AJAX. We discuss why the JSON-based
approach is elegant in adding asynchronous features to Webpages and
also mention some server-side utilities that can be used to generate
JSON data. The simplicity of the JSON data format makes this approach
elegant. Additionally, it is possible to easily modify existing Java
EE services or applications to generate JSON data.
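The cross-domain technique the article describes relies on delivering JSON wrapped in a callback, so that a script tag (which is not subject to the same-origin restriction) can load the data from any host. A minimal server-side sketch of that "JSON with padding" idea follows; the function name, callback name, and payload fields are invented for illustration:

```python
import json

def to_jsonp(callback_name, data):
    # Wrap a JSON payload in a caller-supplied callback so that a
    # <script src="..."> tag on any domain can consume the response
    return "%s(%s);" % (callback_name, json.dumps(data))

# Hypothetical payload a quote service might emit
quote_data = {"symbol": "SUNW", "price": 5.42}
print(to_jsonp("updateQuote", quote_data))
```

The browser page supplies the callback name in the request and defines a JavaScript function of that name; when the generated script loads, the function is invoked with the parsed data, with no XMLHttpRequest and no cross-domain restriction involved.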
PTC Announces Arbortext Content Manager and Arbortext 5.3
Staff, EContent Magazine
PTC, a product development company, has announced the availability
of Arbortext Content Manager to expand the PTC Dynamic Publishing
System. PTC's Dynamic Publishing System combines text authoring,
graphics authoring, content management and configuration management,
automated publishing, and graphics visualization. It is a system
explicitly focused on optimizing the publishing process for
organizations in the pharmaceutical, financial services, government,
transportation, and process manufacturing industries. Highlights of
the PTC Dynamic Publishing System include: support for all phases of
the publishing process; support for component-based XML authoring
for collaborative document creation; broad support for technical
illustrations, either created from scratch or based on CAD data;
content management that bursts documents into reusable components;
configuration and workflow management capabilities; dynamic document
assembly to publish to all media automatically, including print and
electronic, from a single source; and integration with Microsoft
Office. PTC also announced the newest version of its dynamic
publishing software, Arbortext 5.3: this version extends PTC's
commitment to the Darwin Information Typing Architecture (DITA).
DITA is an XML-based architecture for authoring, producing, and
delivering technical information. Arbortext has supported DITA
since 2004 with user interfaces, style-sheets, and document type
definitions (DTDs) for standard DITA information types (Task, Concept,
and Reference), and support for DITA's "specialization" capability,
"conref" inclusion mechanism, and custom table models.
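As an illustration of the DITA features listed — the Task information type and the "conref" inclusion mechanism — a minimal, hypothetical topic could look like this (ids and file names are invented):

```xml
<!-- Hypothetical minimal DITA Task topic; ids and file names are illustrative -->
<task id="replace_filter">
  <title>Replacing the filter</title>
  <taskbody>
    <prereq>
      <!-- "conref" pulls in a note maintained once in a shared warnings topic -->
      <note conref="warnings.dita#warnings/power_off"/>
    </prereq>
    <steps>
      <step><cmd>Remove the access panel.</cmd></step>
      <step><cmd>Lift out the old filter and insert the new one.</cmd></step>
    </steps>
  </taskbody>
</task>
```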
See also: DITA references
IODEF/RID over SOAP
Kathleen Moriarty and Brian H. Trammell, IETF Internet Draft
The Incident Object Description Exchange Format (IODEF) describes an
XML document format for exchanging data between CSIRTs or others
responsible for security incident handling for network providers
(NPs). The defined document format provides an easy way for CSIRTs
to exchange data in a form that can be easily parsed. In order
for the IODEF documents to be shared between entities, a uniform
method for transport is necessary. SOAP will provide a layer of
abstraction and enable the use of multiple transport protocol bindings.
IODEF documents and extensions will be contained in an XML Real-time
Inter-network Defense (RID) envelope inside the body of a SOAP message.
For some message types, the IODEF document or RID document may stand
alone in the body of a SOAP message. The RIDPolicy class of RID (e.g.,
policy information that may affect message routing) will appear in
the SOAP message header. This draft outlines the SOAP wrapper for all
IODEF documents and extensions to facilitate an interoperable and
secure communication of documents. The SOAP wrapper allows for
flexibility in the selection of a transport protocol. The transport
protocols will be provided through existing standards and SOAP binding,
such as SOAP over HTTP/TLS and SOAP over BEEP.
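The message layout described above can be sketched in a few lines of Python: a RID envelope carrying an IODEF document inside the SOAP Body, with RIDPolicy in the SOAP Header. The RID and IODEF namespace URIs and element names below are placeholders, not the ones the draft defines:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2 envelope
RID_NS = "urn:example:rid"      # placeholder, not the real RID namespace
IODEF_NS = "urn:example:iodef"  # placeholder, not the real IODEF namespace

envelope = ET.Element("{%s}Envelope" % SOAP_NS)
header = ET.SubElement(envelope, "{%s}Header" % SOAP_NS)
ET.SubElement(header, "{%s}RIDPolicy" % RID_NS)       # routing-related policy
body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
rid = ET.SubElement(body, "{%s}RID" % RID_NS)         # RID envelope
ET.SubElement(rid, "{%s}IODEF-Document" % IODEF_NS)   # the incident report

print(ET.tostring(envelope, encoding="unicode"))
```

Keeping RIDPolicy in the header lets intermediaries make routing decisions without parsing the potentially large incident document in the body.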
U.S. Technology Czar Says More IT Workers Needed
Stan Gibson, eWEEK
Following a time of mass avoidance in the aftermath of the dot-com
bust, the U.S. IT work force is facing a shortage of people,
according to the Commerce Department's technology czar. "The IT
work force is not skilled enough and almost never can be skilled
enough," said Robert Cresanti, undersecretary of commerce for
technology: "There are not enough engineers with the appropriate
skill sets." Cresanti said U.S. colleges and universities are not
enrolling enough engineering students, resulting in a dearth of
information technology professionals. In addition to boosting
engineering enrollment, he urged opening the gates to more foreign
workers, including H-1B holders. "Without H-1B visas, we would have
economic dislocation." The third quarter showed a sharp drop in IT
worker confidence. Speeding up the processing of student visas is also
needed, he said. Many foreign students are unable to study in the
United States because tight visa policies in the wake of 9/11 are
preventing them from doing so. "It's not just India, but other
countries like Russia and Israel." As far as future technologies are
concerned, Cresanti said nanotechnology is the most important.
Although health concerns about nanotechnology need to be addressed,
he said: "We cannot afford not to be leaders in nanotechnology.
It's the way everything will be made."
oNVDL: Open Source NVDL Implementation Based on Jing
George C. Bina, XML-DEV Announcement
Developers at OxygenXML.com announced the availability of oNVDL, an
open source NVDL implementation based on Jing. Namespace-based
Validation Dispatching Language (NVDL) is an ISO Standard: ISO/IEC
19757-4, being Part 4 of Information technology - Document Schema
Definition Languages (DSDL). ISO/IEC 19757 defines a set of Document
Schema Definition Languages (DSDL) that can be used to specify one or
more validation processes performed against Extensible Markup Language
(XML) documents. A number of validation technologies are standardized
in DSDL to complement those already available as standards or from
industry. An NVDL script controls the dispatching of elements or
attributes in a given XML document to different validators, depending
on the namespaces of the elements or attributes. An NVDL script also
specifies which schemas are used by these validators. These schemas
may be written in any schema languages, including those specified by
ISO/IEC 19757. oNVDL allows NVDL scripts to invoke validation against
XML Schema, RELAX NG, and Schematron. The oNVDL distribution includes
binaries, source code, and documentation, and is available online.
oNVDL was developed to add NVDL support to the oXygen XML editor.
oXygen XML Editor 8.0 uses oNVDL to validate NVDL scripts and to
validate documents against NVDL scripts. It also includes a sample
showing guided editing and validation of an XHTML document with embedded
XForms markup. [Note: Rick Jelliffe announced that the ISO Standards
for Schematron, RELAX NG, and NVDL are now available free from ISO.
"ISO is now hosting from their site free PDF versions of many ISO
standards, notably Schematron, RELAX NG (full and compact syntax) and
NVDL. These are available from the Publicly Available Standards
section. Other free standards available include for C, C# and CLI,
FORTRAN, Z, JPEG2000, CGM, and many concerned with telephony and
removable media. ASN.1 is on the way, too."]
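To illustrate the dispatching described above — for instance, for the XHTML-with-embedded-XForms sample the announcement mentions — a short NVDL script might look like this (the schema file names are hypothetical placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rules xmlns="http://purl.oclc.org/dsdl/nvdl/ns/structure/1.0">
  <!-- Dispatch XHTML elements to a RELAX NG schema -->
  <namespace ns="http://www.w3.org/1999/xhtml">
    <validate schema="xhtml.rng"/>
  </namespace>
  <!-- Dispatch embedded XForms markup to its own schema -->
  <namespace ns="http://www.w3.org/2002/xforms">
    <validate schema="xforms.rng"/>
  </namespace>
  <!-- Reject elements from any other namespace -->
  <anyNamespace>
    <reject/>
  </anyNamespace>
</rules>
```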
See also: Rick's note
Major Vendors Put Open Source Into Turmoil
Marc Ferranti, InfoWorld
Major software vendors are shaking up the open-source market. Microsoft
Corp.'s deal with Novell Inc. and Oracle Corp.'s move to support Red
Hat Linux have sent IT investors scurrying to figure out what it all
means. Underneath the current angst, though, there are signs of a
bedrock belief in Linux and open source. IDC reported that while global
Windows server sales were $17.7 billion for 2005, compared to Linux's
$5.7 billion, Linux growth was 20.8 percent in the fourth quarter,
compared to 4.7 percent revenue growth for Windows. Government muscle
is also behind open-source software. Municipalities around Europe, for
example, are switching to open source. Sun Microsystems Inc.'s
announcement this week that it would open-source Java under the GPL
is a vote for the validity of the license among developers working on
next-generation products. Meanwhile, open-source companies with
innovative business models are starting to go public. Open-source
security vendor Sourcefire Inc. announced it would go public within
24 hours of Oracle's announcement that it would support Red Hat.
Open-source application tools developer Trolltech ASA went public on
the Oslo Stock Exchange in July [2006].
Playing for Keeps
Daniel E. Geer, ACM Queue
Inflection points come at you without warning and quickly recede out of
reach. We may be nearing one now. If so, we are now about to play for
keeps, and 'we' doesn't mean just us security geeks. If anything, it's
because we security geeks have not worked the necessary miracles
already that an inflection point seems to be approaching at high velocity.
As we already knew when Fred Brooks wrote The Mythical Man-Month in 1975,
an increase in features with each new release of a software product
means that as the product grows, the rate of bug finding falls at first
but then begins to rise. Brooks estimates that 20 to 50 percent of all
fixes for known bugs introduce unknown bugs. Therefore, rationally
speaking, there comes a time at which many bugs should be left
permanently in place and fully documented instead of fixed. Excepting
the unlikely, obscure, and special case of security flaws that are
intentionally introduced into products, security flaws are merely a
subset of all unintentional flaws and will thus also rise with system
complexity. The snowballing complexity our software industry calls
progress is generating ever-more subtle flaws. It cannot do otherwise.
It will not do otherwise. This is physics, not human failure. Sure,
per unit volume of code, it may be getting better. But the amount of
code you can run per unit of time or for X dollars is growing at
geometric rates. Therefore, for constant risk the goodness of our
software has to be growing at geometric rates just to stay even. The
only alternative to the problem of complexity vs. security is to make
computing not be so general purpose, to get the complexity out by
creating appliances instead. Corporate America is trying hard to do
this: Every lock-down script, every standard build, every function
turned off by default is an attempt to reduce the attack surface by
reducing the generality. The generality is where the complexity lives,
in exactly the same way that the Perl mantra ("there's always another
way") is why correctness for Perl can be no more than 'Did it work?'
XML.org is an OASIS Information Channel
sponsored by BEA Systems, Inc., IBM Corporation, Innodata Isogen, SAP AG and Sun
Microsystems, Inc.
Use http://www.oasis-open.org/mlmanage
to unsubscribe or change an email address. See http://xml.org/xml/news_market.shtml
for the list archives.