You seem to be misinterpreting the role of OGC. Think of OGC as developing the equivalent of plug-and-play specifications, but instead of for hardware (CD-ROM drives, etc.) they are for (software) services.
If we substitute CD-ROM drives for OGC services, then perhaps you will see that what you are asking is not so relevant.
What is the speed or throughput of a plug-n-play CD-ROM drive versus a non-p-n-p model? The answer doesn't have much to do with being plug-n-play: either one could be faster, irrespective of the spec.
Our research group began (today!) a small project to benchmark WMS and WFS (open source implementations) for speed in a wide range of situations (data volume, multi-user load, Linux vs. Windows, available RAM, etc.). The implementations themselves are at the root of the actual performance, however, not whether or how they support OGC specs. We are interested in the results to be able to coach future organizations wishing to include these services in SDI projects: which combinations of machines, clients, services and databases are appropriate for selected situations or use cases. In all cases we assume OGC specs, as this provides the ability to combine the (heterogeneous) services in the first place.
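
(To give a flavour of what a single benchmark run involves, here is a
minimal timing probe in Python. This is only a sketch: the endpoint URL,
layer name and bounding box are hypothetical placeholders, not our actual
test configuration, and the real harness also varies data volume and
concurrent load.)

import time
import requests  # third-party HTTP library

# Hypothetical WMS endpoint and GetMap parameters (placeholders only).
WMS_URL = "http://example.org/geoserver/wms"
PARAMS = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "topp:states",   # hypothetical layer name
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "-125,24,-66,50",
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}

def time_getmap(n=10):
    """Issue n GetMap requests and report wall-clock latencies."""
    timings = []
    for _ in range(n):
        start = time.perf_counter()
        resp = requests.get(WMS_URL, params=PARAMS, timeout=30)
        elapsed = time.perf_counter() - start
        resp.raise_for_status()
        # A WMS may return an XML service exception with HTTP 200, so
        # check the content type before trusting the measurement.
        if not resp.headers.get("Content-Type", "").startswith("image/"):
            raise RuntimeError("server returned a service exception")
        timings.append(elapsed)
    print("min/avg/max (s): %.3f / %.3f / %.3f"
          % (min(timings), sum(timings) / len(timings), max(timings)))

if __name__ == "__main__":
    time_getmap()

(Repeating such probes while varying the server OS, RAM and number of
concurrent clients is what lets us compare implementations rather than
the spec itself.)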
M Gould
At 21:12 17/02/2005, Anthony Quartararo wrote:

>Does anyone from OGC or anyone who deals with them have any publishable
>metrics on interoperability performance? Theory and concept are grand
>ideals, but the reality of the situation is, what does the user
>community experience? The internet-bubble-era maxim still stands: if a
>user does not get prompt, first-time, accurate service from a website,
>that user is gone and very hard to get back. This goes to the heart of
>the OGC open standards initiative to promote the use of the
>technologies it fosters, but if the performance is circa-1990 WWW, then
>why bother? I would be interested, and even challenge anyone, to make
>performance metrics available for general consumption on various
>scenarios of OGC-certified technology. I am talking bits 'n' bytes,
>folks: network latency, refresh rates, network parameters, test data
>used (raster/vector), analytical vs. static interaction. To my
>knowledge, there is nothing like this published on the OGC website, nor
>by any companies involved in OGC standards. Why not? I cannot believe
>that anyone forgets to do this type of testing, so let's see the
>results please....
>
>Oh, and I know a lot of factors go into these metrics, but something,
>anything would be helpful to assess the veracity and efficacy of all
>this OGC work, no?
>
>Anthony
-----------------------
Michael Gould
Information Systems Department (Lenguajes y Sistemas Informáticos)
Universitat Jaume I
E-12071 Castellón, Spain
http://www.mgould.com
GeoInfo group http://www.geoinfo.uji.es
and TeIDE SDI consortium http://redgeomatica.rediris.es/teide/
2005 Vespucci summer school http://www.vespucci.org
_______________________________________________
gislist mailing list
gislist@lists.geocomm.com
http://lists.geocomm.com/mailman/listinfo/gislist

_________________________________
This list is brought to you by
The GeoCommunity
http://www.geocomm.com/

Get Access to the latest GIS & Geospatial Industry RFPs and bids
http://www.geobids.com