XML and Web Services In The News - 22 August 2006
Provided by OASIS
Edited by Robin Cover
This issue of XML Daily Newslink is sponsored by SAP
HEADLINES:
W3C Working Draft: Web Applications Packaging Format Requirements
Marcos Caceres (ed), W3C Technical Report
W3C's Web Application Formats Working Group has released a First Public
Working Draft for "Web Applications Packaging Format Requirements." The
Web applications addressed by this document are typically small
client-side applications for displaying and updating remote data,
packaged to allow a single download and installation on a client
machine. The application may execute outside the typical Web browser
interface. Examples include clocks, stock tickers, newscasters, games,
and weather forecasters. Some existing industry solutions go by the
names "widgets", "gadgets" or "modules". Application Packaging is the
process of bundling an application and its resources into an archive
format (e.g., a '.zip' file) for the purpose of distribution and
deployment. A package bundle usually includes a manifest, which is a
set of instructions that tell a host runtime environment how to install
and run the packaged application. Application packaging is used on both
the server-side, as is the case with Sun's JEE .war files and .ear files
and Microsoft's .NET .cab files, and on the client-side, as is the case
with widgets such as those distributed by Apple, Opera, Yahoo! and
Microsoft. Currently, there is no standardized way to package an
application for distribution and deployment on the web. Each vendor has
come up with their own solution to what is essentially the same problem.
The working group hopes that, by standardising application packaging,
authors will be able to distribute and deploy their applications across
a variety of platforms in a standardized manner that is both easy to use
and device independent.
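The packaging process described above can be sketched with Python's standard zipfile module. This is a minimal sketch: the manifest fields below ("name", "entry_point") are purely illustrative, since, as the draft notes, no standardized manifest format exists yet.

```python
import io
import json
import zipfile

# Hypothetical manifest; no standardized format existed at the time,
# so the field names here are illustrative only.
manifest = {"name": "clock-widget", "version": "1.0",
            "entry_point": "index.html"}

# Bundle the manifest and the application's resources into a single
# .zip archive for one-download distribution (built in memory here).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("manifest.json", json.dumps(manifest))
    pkg.writestr("index.html", "<html><body>clock</body></html>")

# A host runtime would read the manifest back to learn how to install
# and launch the packaged application.
with zipfile.ZipFile(buf) as pkg:
    loaded = json.loads(pkg.read("manifest.json"))
print(loaded["entry_point"])  # prints "index.html"
```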
See also: the W3C news item
What Data Is Your Metadata About, and Where Is It?
Bob DuCharme, bobdc.blog
Some people are doing valuable work with pure metadata about medical
conditions and potential treatments as they use RDF/OWL tools to find
new relationships, but I think too many are designing metadata for
nonexistent data that they somehow think they will inspire someone else
to create. In typical discussions about the lack of RDF data on the web,
some people point out the progress in the development of tools that
let us treat non-RDF data as triples, thereby adding this data to a potential
semantic web. I think that this is great, but what I'd really like to
see is RDF/OWL ontologies that describe this data so that we can get
more value from that data. As with many IT projects, starting with a
body of existing data and then creating a model that works well with
it is messier and more difficult than starting with a blank slate, but
from the potential semantic web to the internal systems of many, many
companies, the greatest opportunities for the use of metadata are in
building metadata around existing data. In forthcoming postings here,
I'll write about (or, more likely, ask about) the creation of RDF/OWL
ontologies for existing sets of data and how those ontologies add value
to that data.
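Building metadata around existing data can be sketched in miniature: the snippet below wraps rows of plain tabular records in RDF-style triples, serialized as N-Triples. The namespace URI and property names are hypothetical; a real ontology would define them in RDF/OWL.

```python
# Existing non-RDF data: plain records about medical conditions.
rows = [
    {"id": "c042", "condition": "hypertension", "treatment": "lisinopril"},
    {"id": "c043", "condition": "migraine", "treatment": "sumatriptan"},
]

NS = "http://example.org/med#"  # hypothetical namespace

def to_ntriples(row):
    """Emit one N-Triples statement per non-id column of a record."""
    subject = f"<{NS}{row['id']}>"
    return [f'{subject} <{NS}{prop}> "{value}" .'
            for prop, value in row.items() if prop != "id"]

triples = [t for row in rows for t in to_ntriples(row)]
print(triples[0])
# prints: <http://example.org/med#c042> <http://example.org/med#condition> "hypertension" .
```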
Architectural Manifesto: The future of Mobile Web Services
Mikko Kontio, IBM developerWorks
A Web service is a software system that has an interface and is
designed to work in machine-to-machine environments. In a Web services
framework, one application calls another application using an interface.
The interface is described using WSDL so that computers can uniformly
process it. In the simplest form, a Web service requester sends an HTTP
request to a Web service provider. The body of the request is an XML
string structured according to the service's WSDL. Upon receiving a
request, the service answers it by sending an XML string back in an
HTTP response. All of this falls within
the normal HTTP request-response paradigm, the only difference being
that the request and response are XML strings. The Web services model
has been around for a while, but developers and companies are only just
starting to figure out how to leverage it. Companies like Flickr and
Google have made fast progress with public services that challenge
developers to innovate, all to the company's benefit. Private services
work on a different model but offer equally compelling benefits.
Packaging server-based enterprise applications as Web services enables
users to access data and functions in ways not imaginable when the
applications were first developed. This is good news for everyone, but
especially for mobile developers.
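The exchange described above can be sketched without a network: the provider side reduces to a function that parses an XML request and returns an XML response string. The element names ("GetQuote", "symbol", "price") are hypothetical; a real service would define them in its WSDL.

```python
import xml.etree.ElementTree as ET

# Hypothetical request an HTTP client would POST to the provider.
request = "<GetQuote><symbol>SAP</symbol></GetQuote>"

def provider(request_xml):
    """Stand-in for the service provider: parse the XML request and
    return the XML string that would travel back in the HTTP response."""
    symbol = ET.fromstring(request_xml).findtext("symbol")
    return (f"<GetQuoteResponse><symbol>{symbol}</symbol>"
            f"<price>42.00</price></GetQuoteResponse>")

# The requester parses the XML string out of the HTTP response body.
response = provider(request)
price = ET.fromstring(response).findtext("price")
print(price)  # prints "42.00"
```

The only difference from an ordinary HTTP exchange, as the article notes, is that the request and response bodies are XML strings.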
See also: W3C Mobile Web Initiative
Law Enforcement Agencies Explore Semantics
Dibya Sarkar, Federal Computer Week
Semantic technology is poised to become the next evolutionary step
in helping law enforcement agencies automatically analyze and collect
pertinent information on suspects and criminals from a wide range of
data sources. Experts say the use of semantic technology is growing
among consultants and application developers. The World Wide Web
Consortium's adoption of two semantic standards — the Resource
Description Framework (RDF) and Web Ontology Language (OWL) — has
further spurred the use of the technology in the past two years.
Although the intelligence community is probably the most advanced in
using such tools, experts note that deployment in other sectors is
still sporadic. Paul Wormeli, executive director of the Integrated
Justice Information Systems Institute, a nonprofit organization that
prompts the technology industry to develop new standards and practices
in the public safety sector, said several companies are beginning to
deploy semantic technology, but it is still new to state and local
law enforcement agencies. He said law enforcement officials are still
struggling with implementing Extensible Markup Language-based messaging
standards such as the Global Justice XML Data Model, and 200 similar
projects are probably under way. Mike Kinkead, chief executive officer
at Metatomix, based in Waltham, Mass., said the company has developed
and deployed several modules using semantic technology and the RDF
and OWL standards specifically for law enforcement and justice
agencies. He said the technology acts more like a sophisticated human
analyst than a program.
See also: W3C Semantic Web
Massachusetts to Release ODF Update
Martin LaMonica, CNET News.com Blog
The Massachusetts Information Technology Division on Wednesday is
scheduled to send a letter to disability advocacy groups to address
accessibility and the state's move to the OpenDocument format, according
to a government spokesperson. The letter will be called a mid-year
assessment on ODF and will address accessibility, Felix Browne, a
spokesman for the administration of Governor Mitt Romney, said on
Tuesday. The Information Technology Division (ITD), part of the state's
executive branch, has drawn international attention for its decision
to save documents in the OpenDocument format, or ODF, by January 2007.
The ITD had been planning on releasing a mid-year assessment on its ODF
implementation this summer in conjunction with the Secretary of
Administration and Finance. In early July [2006], Louis Gutierrez said
that the assessment would address the question of accessibility for
people with disabilities and the timeline for implementation. The state
has also engaged consulting firm EDS to do a five-year cost-benefit
analysis of the move to ODF. On Friday last week, Gutierrez met with
people who represent disability groups to share the contents of the
letter, according to one person familiar with the meeting. State IT
officials have come under harsh criticism from disability groups for
not adequately addressing accessibility in the ODF policy. This
approach, which
Gutierrez called promising in July, would allow people with
disabilities to continue using accessibility tools optimized for
Microsoft Office, rather than less mature open-source productivity
suites which support OpenDocument.
Trends in Cyberinfrastructure for Bioinformatics and Computational Biology
Rick Stevens, CT Watch Quarterly
Probably the most important trend in modern biology is the increasing
availability of high-throughput (HT) data. The earliest forms of HT
data were genome sequences and, to a lesser degree, protein sequences;
now, however, many forms of biological data are available via automated
or semi-automated experimental systems. This data includes gene
expression data, protein expression, metabolomics, mass spec data,
imaging of all sorts, protein structures and the results of mutagenesis
and screening experiments conducted in parallel. So an increasing
quantity and diversity of data are major trends. To gain biological
meaning from this data, it must be integrated (finding and constructing
correspondences between elements) and curated (checked for errors,
linked to the literature and previous results, and organized). The
challenges in producing high-quality,
integrated datasets are immense and long term. The second trend is the
general acceleration of the pace of asking those questions that can be
answered by computation and by HT experiments. Using the computer, a
researcher can be 10 or 100 times more efficient than by using wet lab
experiments alone. Bioinformatics can identify the critical experiments
necessary to address a specific question of interest. Thus a biologist
who can leverage bioinformatics operates in a fundamentally different
performance regime than one who cannot.
Publishers Fight Back Against Google with New Book Search Service
Steve Bryant, eWEEK
Publishers who want to make their books searchable online but aren't
comfortable with Google Book Search now have another option. Publisher
HarperCollins and Austin, Texas-based LibreDigital announced today a
hosted service called LibreDigital Warehouse that will give publishers
and booksellers the ability to deliver searchable book content on their
own Web sites. Like Google Book Search, the service will allow users
to search the entire content of a book and preview a percentage of its
text and illustrations. Unlike Google, LibreDigital Warehouse allows
publishers to customize which pages a user can view, which pages are
always prohibited from viewing (such as the last three pages of a
novel), and what overall percentage of a book is viewable. Publishers
can customize these rules per title and per partner. LibreDigital
Warehouse will offer 160 to 200 HarperCollins titles initially.
HarperCollins plans for the database to eventually include up to 10,000
titles. HarperCollins is currently the only participating publisher,
but the program has received a "warm welcome" from other publishers
who are also interested in participating, according to LibreDigital.
LibreDigital is a division of Newstand, which provides exact digital
duplicates (layout included) of newspapers such as the New York Times
and USA Today. Miller says that Newstand, in business since 1999, has
more experience with book scanning and digital rights management than
Google, and that its process is superior.
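The per-title viewing rules described earlier (pages always prohibited from viewing, plus an overall percentage cap) can be sketched as a simple check. The function and its parameters are hypothetical, not LibreDigital's actual API.

```python
def can_view(page, pages_viewed, total_pages, prohibited, max_percent):
    """Decide whether a user may view `page` under a title's rules:
    some pages are always prohibited, and the share of the book a
    user sees may not exceed `max_percent` of its pages."""
    if page in prohibited:
        return False
    # Viewing one more distinct page must stay within the cap.
    viewed = pages_viewed | {page}
    return 100 * len(viewed) / total_pages <= max_percent

# A 300-page novel whose last three pages are never viewable, with
# at most 20% of the book viewable overall.
prohibited = {298, 299, 300}
print(can_view(299, set(), 300, prohibited, 20))  # prints "False"
print(can_view(1, set(), 300, prohibited, 20))    # prints "True"
```

A publisher could keep one such rule set per title and per partner, as the article describes.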
XML.org is an OASIS Information Channel
sponsored by BEA Systems, Inc., IBM Corporation, Innodata Isogen, SAP AG and Sun
Microsystems, Inc.
Use http://www.oasis-open.org/mlmanage
to unsubscribe or change an email address. See http://xml.org/xml/news_market.shtml
for the list archives.