GeoCommunity Mailing List Archives

Subject: RE: GISList: Microsoft SQL Server Vs Oracle Spatial9i
Date: 12/10/2002 03:29:33 PM
From: Dimitri Rotow
> Could you please provide some more information to back up your statement
> that an OGC standards based solution is obsolete and non-scalable? In
> particular can you explain what is wrong with the OGC Simple Features for
> SQL specification?
Sure. The fundamental architectural mistake is to embed the "spatial" functionality within the DBMS. That means all geoprocessing is bottlenecked by the centralized DBMS. If you have 100 users who want to do geoprocessing, basically every one of those 100 users is time-sharing the central DBMS and the server that runs it.
A more modern approach is to distribute the geospatial computing within spatially aware clients and to use the DBMS as, basically, a file cabinet. In this scenario if you have 100 users doing geoprocessing you have the power of 100 machines available.
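To make the contrast concrete, here is a rough sketch in Python (the table and column names, the PostGIS-style ST_* functions, and the psycopg2/shapely libraries are illustrative assumptions, not a reference to any particular product):

    import psycopg2                      # DBMS driver (assumed)
    from shapely import wkb              # client-side geometry engine (assumed)

    conn = psycopg2.connect("dbname=gis")
    cur = conn.cursor()

    # (a) Spatial functionality embedded in the DBMS: the buffer is computed
    #     on the central server, so 100 users time-share that one machine.
    cur.execute("SELECT ST_AsBinary(ST_Buffer(geom, 1000)) FROM roads")
    buffered_on_server = [wkb.loads(bytes(r[0])) for r in cur.fetchall()]

    # (b) DBMS as a file cabinet: fetch the raw geometry and do the same
    #     work locally, so each of the 100 users burns their own CPU.
    cur.execute("SELECT ST_AsBinary(geom) FROM roads")
    roads = [wkb.loads(bytes(r[0])) for r in cur.fetchall()]
    buffered_on_client = [g.buffer(1000) for g in roads]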
> It seems to me that a database vendor that implements built-in support
> for SF-SQL should be able to achieve good performance.
This is fine for simple retrieval of small, ad-hoc chunks, for display of results that have already been computed, or for computationally trivial tasks such as finding a handful of nearest points. It's too slow if people actually want to work with the data interactively to make sophisticated comparisons or edits that are non-local.
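A "handful of nearest points" request is exactly the sort of thing the central DBMS handles well, because the result set is tiny and the work is one index-assisted lookup (sketch only; the table name and PostGIS-style functions are assumptions):

    # A small, ad-hoc retrieval: perfectly reasonable to leave to the server.
    nearest_sql = """
        SELECT name, ST_AsText(geom)
        FROM cities
        ORDER BY ST_Distance(geom, ST_GeomFromText('POINT(-122.3 47.6)', 4326))
        LIMIT 5
    """
    # Executed through the same driver as in the earlier sketch, e.g.:
    # cur.execute(nearest_sql); print(cur.fetchall())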
Let's take an example. Suppose you have a map of provinces, roads and city points, and you also have a map of, say, coverage areas of cell antennas. You want to "cookie cut" the provinces, roads and city points with the antenna coverage areas to create a new map that has "cut out" provinces, roads and city points falling only within the antenna coverage areas. It's pretty silly from a performance perspective if all the necessary geometric and topological comparisons need to be mediated through SF-SQL with only one person doing such a job, and it's a (geometrically?) much worse proposition when more than one person does such things at the same time.
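Pushed through SF-SQL, that cookie cut looks something like the query below, and every bit of the geometric work runs on the one shared server, repeated for each user who asks for it (again a sketch with assumed table names and PostGIS-style functions):

    # Clip each layer against the antenna coverage areas inside the DBMS.
    # With N concurrent users, the central server does all of this N times.
    clip_provinces_sql = """
        SELECT p.name, ST_AsBinary(ST_Intersection(p.geom, a.geom))
        FROM provinces p
        JOIN antenna_coverage a ON ST_Intersects(p.geom, a.geom)
    """
    # ... and the same pattern again for the roads and city points layers:
    #     FROM roads r  JOIN antenna_coverage a ON ST_Intersects(r.geom, a.geom)
    #     FROM cities c JOIN antenna_coverage a ON ST_Intersects(c.geom, a.geom)
    # cur.execute(clip_provinces_sql)   # executed via the earlier cursor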
> Please don't try to "prove" your point by demonstrating that some
> implementations are slow ... instead please demonstrate that there can't
> be a scalable and/or fast implementation.
Well, I'd say that if all the known examples of a particular technology are frightfully slow at any sophisticated interactive usage, the burden of proof is upon the partisans of that technology to show how it is scalable. :-) Be that as it may, I don't think anyone in modern times needs convincing that, all other things being equal, an architecture that time-shares a single CPU will not be as fast as an architecture that distributes the same computing job to multiple CPUs.
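A back-of-the-envelope sketch of that point (the figures are made up, purely illustrative):

    # N users each need a geoprocessing job that takes T seconds of CPU.
    N, T = 100, 30.0                 # hypothetical figures
    # Centralized: one server time-shares all N jobs, so the last user
    # waits roughly N * T seconds (ignoring I/O and scheduling overhead).
    last_user_waits = N * T
    # Distributed: each client runs its own job on its own CPU.
    every_user_waits = T
    print(last_user_waits, every_user_waits)   # 3000.0 vs 30.0 seconds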
> > It is also disingenuous to criticize (by implication, your use of the
> > phrase "locking you down") using a specific desktop GIS or middleware
> > package, as if welding yourself to OGC does not also "lock" you down to
> > a given set of constraints.
>
> I will concede that sticking to a standards based approach has some
> downsides - for instance, the standard does not address many kinds of
> geometry (2.5D, topology). However, I feel it is relatively easy to have
> many different client implementations all backed by one common corporate
> spatial datastore.
Absolutely, which is part of the appeal of such standards. The problem is that the clients are limited to relatively trivial things and lack the bandwidth (by virtue of the standard's decision to use a timeshared architecture instead of a distributed one) for doing sophisticated things.
Consider this thought experiment: Suppose we are using PhotoShop to do sophisticated graphics editing on an image with 50 million bytes of data or so. We're going to lay down a selection mask and then, within all pixels affected by that mask (which could be topologically very complex), we are going to do a threshold by color or some other typically PhotoShop-style, visually complex operation that involves multiple layers.
Let's say we have 50 artists sitting at 50 desktop machines and each performs the above task. All the tasks run locally at high speed with the power of a processor for each user. No problems. After all, it's PhotoShop, right?
Now, let's imagine that someone has sold our company on the benefits of an "open" system where each image is stored at a pixel-per-record level within a "graphically" enabled DBMS. Our company did this so that our graphical data assets could be maintained in a centralized database where just about any client could interact with our graphics data (someone forgot that the main task was doing things with PhotoShop).
In this scenario, dumb clients interact with the graphically-enabled DBMS through GF-SQL. Whenever they need a pixel they get to it through GF-SQL.
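To see where that leads, here is a toy simulation of the two clients (pure illustration: "GF-SQL" is imaginary, so an in-memory SQLite table stands in for the graphically enabled DBMS, and the image is kept small so the demo finishes):

    import sqlite3
    import numpy as np

    # Toy stand-in for the "graphically enabled" DBMS: one record per pixel.
    # (The thought experiment is a ~50 MB image; 256x256 keeps the demo quick.)
    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE pixels (x INTEGER, y INTEGER, value INTEGER)")
    db.executemany("INSERT INTO pixels VALUES (?, ?, ?)",
                   [(x, y, int(img[y, x]))
                    for y in range(img.shape[0]) for x in range(img.shape[1])])

    # PhotoShop-style client: threshold the whole image in one local array op.
    local_mask = img > 128

    # Pixel-per-record client: one query per pixel it wants to touch.
    remote_mask = [db.execute("SELECT value FROM pixels WHERE x=? AND y=?",
                              (x, y)).fetchone()[0] > 128
                   for y in range(img.shape[0]) for x in range(img.shape[1])]
    # 65,536 round trips to do what the local client did in one statement.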