He sees a problem with how Semantic Web Technologies (SWTs) have typically been applied: by adding layers alongside existing ones, which just increases "the number of interfaces and mediations required". Furthermore, most publications using SWTs assume homogeneous environments - languages, ontologies, and definitions are all constrained, and any differences are avoided. The other mistake highlighted is that the work has been divided along the usual architectural layers (persistence, processes, UI, etc.), which has led to each of these layers having its own Semantic Web layer added - creating "a disaster for software architects and engineers who must use the results from several communities in building software applications that are hosted and interconnected".
There are two obvious ways you could attack this argument. The first is that these kinds of constraints have been applied because of the immaturity of the underlying systems. It's hard to develop an integrated system in one go - dividing these problems into their layers is one obvious way to make progress. There is also obvious infrastructure missing: not just better triple stores, but also ways to make sure you can reuse ontologies and processes. The second is that there are many examples of aligning ontologies, reusing them, and merging concepts from them - but that work, too, is still in its infancy. Areas like data mining, which the Semantic Web could leverage, lack mainstream use as well.
Straight ahead from here leads to more SWT languages, hard-to-integrate ontologies, and technology components such as libraries, RDF databases, and logic reasoners. Those who build real-world applications will have to integrate all those elements to use them holistically, thus leaving the integration problem unresolved. As this approach increases the effort required in every part of the software engineering life cycle, chances are that developers will adopt the SWT only for very specific areas and solutions, rather than for general use across all domains in which computing is applied.
He offers a possible solution to the data and process heterogeneity: rather than restricting SWT to the edges of systems, apply it throughout them.
Rather than looking at SWT as interface-wrapping technology, it seems appropriate to make it the foundation for all aspects of information technology and scientific computing. In concrete terms, one way to eliminate mediations when crossing layers is to ensure that data objects are encoded in a single format (such as RDF) and not mapped between layers but rather handed over from layer to layer without change. This, in turn, would challenge the various technologies used for implementing these layers to become totally SWT aware.
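As a minimal sketch of that idea - not from the article, and using plain Python tuples in place of a real RDF store, with illustrative layer names - each layer consumes and produces the same triple structure, so nothing is mapped or wrapped at layer boundaries:

```python
# A triple is just (subject, predicate, object) -- the single shared
# encoding that every layer hands over unchanged.

def persistence_layer():
    """Load triples from storage -- here, a hard-coded sample graph."""
    return [
        ("ex:alice", "foaf:name", "Alice"),
        ("ex:alice", "foaf:knows", "ex:bob"),
        ("ex:bob", "foaf:name", "Bob"),
    ]

def process_layer(triples):
    """Business logic works directly on the triples -- no object mapping."""
    # Derive a symmetric "knows" fact and pass the same structures onward.
    derived = [(o, "foaf:knows", s)
               for (s, p, o) in triples if p == "foaf:knows"]
    return triples + derived

def ui_layer(triples):
    """Rendering also reads the triples directly."""
    names = {s: o for (s, p, o) in triples if p == "foaf:name"}
    return [f"{names.get(s, s)} knows {names.get(o, o)}"
            for (s, p, o) in triples if p == "foaf:knows"]

# The triples flow through every layer in one format; each layer only
# adds to or reads from them.
print(ui_layer(process_layer(persistence_layer())))
# ['Alice knows Bob', 'Bob knows Alice']
```

The point of the sketch is the absence of per-layer translation: swapping the tuples for real RDF (e.g. an rdflib Graph) would change the representation but not the hand-over pattern.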
I think that development goes through cycles of integration and separation but I do agree that if the Semantic Web is just a technology of wrappers it will fail.
Update: Along similar lines is an article featuring Dieter Fensel, "Are Semantic Researchers Missing the Big Picture?", in which he says:
...we do a lot on research of apply[ing] in semantics to all aspects of Enterprise Application Integration where you integrate data, processes, and services (and not only web pages)...
Is the industry neglecting the greater overall goals of scalability for interoperability?
“No,” writes Fensel. “I think they are aware of [it]. For example, Michael Brodie, Scientific Director at Verizon, estimates that worldwide around 1 trillion dollars are spent per annum on application integration. The semantic web community (and not the industry) is mostly ignoring this area.”
I do like the cycle, though: we've gone from an initial SEMANTIC web and its criticism, to semantic WEB, and now, with this criticism, back to highlighting semantic again.